Intelligence (All Prediction and Memory?)

Hey, I bought the book and read a fair part of it, waiting for him to get to some good stuff before I quit … so I may not be giving a fair exposition of it. And it’s a fairly quick read; I just couldn’t stomach his droning. But then I come from a cognitive neuroscience POV, so some of the philosophical conundrums seem a little vacuous to me.

But I do agree that the question of dualism vs. materialism is nonsensical at this point, that consciousness is indeed a systems property, and that when cognitive neuroscientists look for neural correlates of consciousness they generally end up referring to consciousness as a function of a loopy hierarchy. Hofstadter really had that as his main point: that consciousness was a function of those strange self-referential loops in which the self includes itself as an object within its set of items to be updated. Others today take similar positions: Grossberg’s application of his Adaptive Resonance Theory to argue that all conscious states are resonant states (in which feedback occurs between bottom-up perceptual inputs and top-down expectations); Patricia Churchland’s view that “self-representations may be widely distributed across brain structures, coordinated only on an ‘as-needed’ basis, and arranged in a loose and loopy hierarchy” and, in the same article (Science, April 12, 2002), her support of Damasio’s view that “a brain whose wiring enables it to distinguish between inner-world representations and outer-world representations and to build a meta-representational model of the relation between outer and inner entities is a brain enjoying some degree of consciousness”; and Christof Koch’s work illustrating that conscious perception of visual experience requires an interaction between top-down and bottom-up contributions rather than occurring at the top.
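
For what it’s worth, here’s a toy sketch of that resonance idea, entirely my own simplification and not Grossberg’s actual model: a bottom-up input is matched against stored top-down expectations (category prototypes), and only when the match clears a vigilance threshold does the system “resonate” on a category. The function and variable names are invented for illustration.

    # Toy sketch of ART-style resonance (my simplification, not Grossberg's real equations).
    # Bottom-up binary input is compared against stored top-down expectations;
    # resonance occurs only when the overlap clears a vigilance threshold,
    # otherwise the search moves on and may recruit a new category.

    def resonate(input_bits, prototypes, vigilance=0.75):
        """Return the index of the category that resonates with the input,
        creating a new category if none matches well enough."""
        for i, proto in enumerate(prototypes):
            overlap = sum(a & b for a, b in zip(input_bits, proto))
            match = overlap / max(1, sum(input_bits))   # how much of the input the expectation accounts for
            if match >= vigilance:                      # top-down expectation confirmed: resonance
                # learning: the expectation is refined toward the input
                prototypes[i] = [a & b for a, b in zip(input_bits, proto)]
                return i
        prototypes.append(list(input_bits))             # mismatch everywhere: recruit a new category
        return len(prototypes) - 1

    categories = []
    print(resonate([1, 1, 0, 1], categories))  # 0 (new category)
    print(resonate([1, 1, 0, 0], categories))  # resonates with category 0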

That “incorporating oneself into your own mental model of the world” is no “mere” anything.

If indeed those strange loops can be shown to be the system requirement for consciousness, with the degree of intertwined loopiness correlating with the degree of conscious experience, then any system with such an organization could be posited to have some degree of consciousness. Interestingly, this opens the door not only to neurobiologic systems, in which the component parts are separated by small distances and information travels in small fractions of a second, but to computer intelligences, computer networks, and networks of individuals in social structures as well (!), even though the time course of information transmission may be many orders of magnitude slower.

Is consciousness required for intelligence? I do not think so, for indeed Deep Blue is able to achieve solutions to novel problems within a very specific domain. But once an intelligent entity develops a concept of self vs. nonself and includes itself as a player in its analyses, consciousness must arise. And with that comes the ability to flexibly alter what is a salient goal.

You (and other-wise and whoever else is interested) might want to look into Lakoff and Johnson’s work. It’s all about the origin of analogies (at least, what little I’ve read of it). I’m still working my way through Philosophy in the Flesh, which got awfully tedious somewhere before the middle. A friend of mine recommended that I skip to the back where they go into an analysis of their ideas. It’s my understanding that they’ve also published something on the origin of mathematical thought.

Nonsensical why? Just want to check if you think it’s unprovable (akin to proof of a higher being) or if it’s just that one or the other (most likely dualism) is dismissed as being nonsense.

Interesting. I don’t really consider separating the two. Let’s see…so you’re saying that you would consider a purely mechanistic process with no sense of “self” intelligent? Does that include a purely deterministic entity? I can’t quite make that out…it seems to me that some notion of self vs. non-self is a prerequisite for anything I’d consider intelligent. And what counts as “a concept of self”?

I agree. The only other explanation is if Hawkins would respond “well of course to be able to predict you must be able to self-model.” But that would be a pretty broad definition of “prediction.”

You didn’t ask me, but I’m going to throw in my $52,385.98 of Social Security moneys (translation: 2 cents): because I can imagine a continuum of brain function from bare bones to humans, it seems arbitrary to pick one point in that continuum as intelligent and everything below it as not. It almost forces one into scrapping the word “intelligence” and viewing it all as a continuum of brain function capabilities.

Agreed. The word choice of “merely” was an attempt to point out that “awareness” was not due to magic, but rather a by-product of brain function.
Questions for Dseid, Digital Stimulus, or anyone:

  1. Do you think we can have intelligence without goals/motivation? I can’t imagine an artificial intelligence doing anything intelligent w/o goals and motivation. How else could you align its internal processes to produce output without clear direction?
  2. Do you think a more hostile environment would cause quicker evolution of higher intelligence?
  3. Are we limited in our intelligence by our environment? When I read the book Alien, they mentioned the Aliens were 10 or 100 times smarter than humans. As I pondered that, I tried to figure out what evolutionary pressures would cause that.

Dig, I use, as the only useful definition of intelligence, that intelligence equals the solving of novel problems in pursuit of salient goals. So here Raft’s questions are pertinent. Can a non-conscious entity be said to have salient goals? Does Deep Blue have a goal of achieving checkmate, or of achieving the various tactical positions that are in service of that end?

I’ll return later for the other questions but my roadbike beckons …

Minor correction – it’s not analogies, it’s metaphors. Although, in this instance, I’m not sure there’s a substantial difference. It seems to me that the ability to perform one implies the other. I read a bit more before going to sleep and they have a fantastic explanation of prior philosophers’ ideas from a metaphorical standpoint.

While I agree there is a continuum of intelligence, I think there is a need to delineate boundaries (but perhaps I’m wrong and it’s unnecessary). You say “brain function”, which implies that lower animals (those without a brain) are not intelligent. That seems like a fair cutoff line, if one must be drawn. Although it does classify snails and such as intelligent.

No, I don’t think so. Although I think it’s important to clarify “goals”. I assume that they are internally generated? I mean, under that definition, any implementation of the STRIPS algorithm (at least during execution) is intelligent. And you might agree with that, but it seems to be lacking something along “necessary but not sufficient” lines. As with snails, it seems to underestimate what most people mean when they use the term “intelligent”. But I’m OK with that, so long as it’s clear.
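
Just to make the STRIPS point concrete, here is a minimal sketch of STRIPS-style forward planning; it’s my own toy example, not any particular implementation, and the action and fact names are made up. The “goal” is just a set of facts handed to the planner from outside, which is the sense in which it seems externally rather than internally generated.

    # Minimal sketch of STRIPS-style forward search (a toy, not a real planner).
    # The goal is handed in from outside; the planner chains actions whose
    # preconditions hold, applying add/delete lists, until the goal set is satisfied.
    from collections import namedtuple

    Action = namedtuple("Action", "name pre add delete")

    def plan(state, goal, actions, depth=5):
        """Depth-limited forward search over STRIPS operators."""
        if goal <= state:
            return []
        if depth == 0:
            return None
        for act in actions:
            if act.pre <= state:
                new_state = (state - act.delete) | act.add
                rest = plan(new_state, goal, actions, depth - 1)
                if rest is not None:
                    return [act.name] + rest
        return None

    # Hypothetical example: the goal "door_open" is supplied externally.
    actions = [
        Action("pickup_key", pre={"at_door", "key_on_floor"}, add={"has_key"}, delete={"key_on_floor"}),
        Action("open_door",  pre={"at_door", "has_key"},      add={"door_open"}, delete=set()),
    ]
    print(plan({"at_door", "key_on_floor"}, {"door_open"}, actions))
    # -> ['pickup_key', 'open_door']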

Not necessarily. Evolution is a harsh taskmaster; the benefit of higher intelligence would have to be greater than the cost. I’d think that a more hostile environment would generally affect physical characteristics before mental ones. It’s a neat question to ponder though; what type of environment would focus on the mental rather than the physical?

I’d think not, at least not directly. Although, I suppose, “environment” might include anything that exerts an effect. I guess my answer is that intelligence is not limited by environment, but its development (evolutionarily speaking) would be dependent on it. Again, a neat question to ponder.

If I felt the need to argue against your position, I’d say that Deep Blue doesn’t encounter “novel problems”; there is no situation that can arise that is not well-defined.

I’m not sure what I think of the given definition, although it seems acceptable. But again, I’d have to be clear on what “goals” and “novel problems” are. As I replied to RaftPeople, it seems to me that intelligence requires goals. And it seems to me that goal formulation, at least as generally used, requires a sense of self (I almost said consciousness, but that’s overkill). At any rate, I’m fine with leaving definitions fuzzy and working them out along the way, so long as there’s benefit to doing so.

I think the problem I have with dualism vs. monism is that they do not explain why we, if we are dualistic, have exclusive rights to the mental domain. Assume there are mind and matter, as two separate things: what does this say about “artificial” intelligence? So far as I can tell, nothing.

The book sounds like a good read - thanks for the recommendation.

I don’t agree with the definition of intelligence as being purely memory-prediction.

The ability to conceptualize and to interpret the present are equally critical.

Perhaps the generalization is one of pattern recognition.

Materialism vs. Dualism. How I read Searle is that reductionism is insufficient for understanding a sense of self and of consciousness if such are the result of system dynamics. It would be like trying to understand how water behaves in a vortex by analyzing the nature of individual molecules. A vortex is not reducible to individual molecules (see chaos theory). And similarly, just as the rules of fluid dynamics that apply to water are identically applicable to other fluids made of other molecules, even if the specifics of each vortex differ based on the nature of those molecules, so should the rules of “sense of self” and consciousness be the same even if the component parts are not neurons but elements of a computer program or members of a large group. What matters is how the bits are organized and how they interact.

Do you need a hostile environment? Nah. You need an environment in which intelligent behavior has a selective advantage worth its costs. If an entity is presented with the same sorts of problems all the time, and the same sorts of quickly made responses result in survival to the point of self-replication, then the slower nature of intelligent analysis (as opposed to reflex response) and its metabolic cost would be a detriment no matter how hostile the environment. Intelligence is just one strategy, and certainly not the most successful; it has huge costs. Bacteria are by far more successful at dealing with hostile environments than any intelligent creature. But you do need salient goals, and survival certainly makes a goal very salient. An environment in which organisms are presented with multiple novel problems, and the ones which solve them survive better and reproduce more successfully, will result in more rapid development of intelligent behaviors.

That’s 2 and 3.

Deep Blue does indeed deal with novel problems - it is presented with specific board positions that it has never analyzed before and deals with them better than most humans. It is a limited domain but still, within that domain it is very intelligent. In fact it makes analogies from past experiences to predict future success!
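
As I understand it (and this is a generic sketch of game-tree search, not Deep Blue’s actual code, which used massively parallel alpha-beta search and a far richer evaluation), the “past experience” lives mostly in the evaluation function: positions it has never seen are scored by tuned features rather than looked up, and the search picks the move whose worst case looks best. Everything below, including the toy game, is invented for illustration.

    # Generic minimax-with-evaluation sketch (illustrative only).

    def minimax(position, depth, maximizing, moves_fn, apply_fn, evaluate_fn):
        """Score a position by searching `depth` plies ahead and applying a
        static evaluation at the leaves."""
        moves = moves_fn(position)
        if depth == 0 or not moves:
            return evaluate_fn(position)   # novel positions are scored by features, not looked up
        scores = (minimax(apply_fn(position, m), depth - 1, not maximizing,
                          moves_fn, apply_fn, evaluate_fn) for m in moves)
        return max(scores) if maximizing else min(scores)

    def best_move(position, depth, moves_fn, apply_fn, evaluate_fn):
        """Pick the move whose resulting position has the best minimax score."""
        return max(moves_fn(position),
                   key=lambda m: minimax(apply_fn(position, m), depth - 1, False,
                                         moves_fn, apply_fn, evaluate_fn))

    # Tiny toy "game": a move adds 1 or 2 to a counter; evaluation prefers higher counters.
    moves_fn = lambda pos: [1, 2] if pos < 10 else []
    apply_fn = lambda pos, m: pos + m
    evaluate_fn = lambda pos: pos
    print(best_move(0, 3, moves_fn, apply_fn, evaluate_fn))  # -> 2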

I think that we need to be careful in how we define intelligence to not be entirely self serving. Only human arrogance has us defining intelligence as behaving like us. Intelligence is solving novel problems salient to the entity. If that is keeping track of where individuals are in a volume of water many cubic miles large (whales), then that is intelligence for that creature, and in that domain we are quite the slobbering idiots. A machine intelligence may be able to solve many sorts of novel problems salient to it better than we can, and we’d say it is unintelligent because it cannot solve problems salient to us and doesn’t sound or look like us. And an alien intelligence? We wouldn’t recognize it if we met it, if its salient goals were very different.

I’m not sure how that gets away from materialism and dualism. On a different tangent: as enticing as it is, I’ve also never been convinced that emergence in complex systems (I’m taking that to be the opposite of reductionism) holds water. What I mean is, sure, there are “system properties”: weather, vortices, etc. But they always map onto a lower level; you can’t have them without the lower level. Yes, different configurations in state space will yield similar (if not the same) system property, but that’s just aggregation. (“Just” is a little more dismissive than I really mean.) Now, don’t get me wrong…I take it as a given that the process the components (or constituents) are involved in is critical. Without change over time, you got bupkus. I’m just not convinced there’s justification for reifying system properties into entities. They’re useful abstractions, but I hesitate to grant them independent existence.

I wonder…it seems like an odd use of the word “novel”. I mean, no board position can arise that it doesn’t have rules for. And again, I’d have to object to the use of “analogy” here. Although perhaps, not being familiar with Deep Blue’s internals, I’m wrong to object.

Dseid: “I think that we need to be careful in how we define intelligence to not be entirely self serving.”

At first I was against calling Deep Blue intelligent, but then my own recent post on intelligence as a continuum contradicts this and you have a good point here also. I think, at times, I was combining intelligence with self-awareness (when I was ranting about the Chinese Room), and at other times I was closer to your definition.

I think I am firmly in the camp that I don’t even need a clear definition of intelligence, but if I/we can mimic these brain functions (or at least some of them, for now) in a computer, then that’s pretty exciting and the definition can just sit there, unused.

Tangent, prediction (maybe obvious, but I feel the need to say it):
As researchers continue to make progress in specific areas (visual processing, voice recognition, learning, etc.) I can see companies specializing in these areas and selling “modules” of pre-configured neural networks designed to handle certain types of problems. As time goes on, we will combine these separate functions, and continue to build layer upon layer, just like our own brain probably evolved. Eventually we will have something similar in function to our own brain.
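
In software terms, I imagine something like the following (purely a hypothetical sketch; the module names and the interface are made up, not a real product or library): each vendor ships a pre-trained module behind a common interface, and an integrator stacks them so each layer feeds the one above.

    # Hypothetical sketch of composing pre-trained "modules" into a layered system.
    from typing import Callable, List

    Module = Callable[[list], list]   # a module maps a feature vector to a feature vector

    def stack(modules: List[Module]) -> Module:
        """Compose modules so each layer's output feeds the next, bottom-up."""
        def composed(features: list) -> list:
            for module in modules:
                features = module(features)
            return features
        return composed

    # Stand-in modules (in reality these would be pre-trained networks bought off the shelf)
    edge_detector   = lambda pixels: [p > 0.5 for p in pixels]                    # "visual processing" layer
    object_labeler  = lambda edges: ["thing" if e else "blank" for e in edges]    # next layer up
    scene_describer = lambda labels: [f"{labels.count('thing')} objects in view"] # top layer

    perception = stack([edge_detector, object_labeler, scene_describer])
    print(perception([0.9, 0.2, 0.7]))   # -> ['2 objects in view']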

I agree.

In addition, unless I don’t understand dualism (which I will freely admit is possible; I am not well versed in philosophy terms or literature, but I do think about and analyze how things might work in the world quite a bit), it seems like dualism requires some unknown something going on, and I just don’t see a requirement for that.

In my opinion, emotion is important in memory formation, classification, and recall. I think emotional importance/significance is often a large factor in whether a short-term memory becomes a long-term memory. Then there is classification: surely you have “good” and “bad” memories? Then there is recall. If I have 121 memories about a person, which are the strongest? The ones with the strongest emotions/emotional significance attached to them. If I am making a decision about that person, which do I call up first? The strongest ones. I am not sure how this relates to recalling past thinking about a person, though.
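
To put that recall idea in concrete (if cartoonish) terms, here is a toy sketch, entirely my own invention and not a model from the book: each memory carries an emotional-significance weight, and recall simply returns the most strongly weighted memories first. The names (`Memory`, `recall`, the example data) are all hypothetical.

    # Toy sketch of emotion-weighted memory recall (my own cartoon, not a
    # published model): each memory carries an emotional-significance weight,
    # and recall returns the most strongly weighted memories first.
    from dataclasses import dataclass

    @dataclass
    class Memory:
        description: str
        valence: str      # "good" or "bad"
        intensity: float  # emotional significance, 0.0 .. 1.0

    def recall(memories, top_n=3):
        """Return the top_n memories ranked by emotional intensity."""
        return sorted(memories, key=lambda m: m.intensity, reverse=True)[:top_n]

    memories_about_alex = [
        Memory("argued about money", "bad", 0.9),
        Memory("helped me move apartments", "good", 0.8),
        Memory("small talk at the bus stop", "good", 0.1),
    ]
    for m in recall(memories_about_alex, top_n=2):
        print(m.valence, "-", m.description)
    # bad - argued about money
    # good - helped me move apartments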