I’m currently reading a book by Jeff Hawkins titled On Intelligence, which I’m looking forward to finishing soon. As I understand it, his premise is that the brain is not just a complex computer; its operations are fundamentally different. Currently I’m in chapter 6, yet the book has gotten me so excited that I decided to post this aspect of his theory before completing the rest of the story.
In his book, Hawkins creates what can only be described as an “elegant theory” of what intelligence is and how it operates. He calls this the memory-prediction framework. He describes the six-layer cortex and how it functions from its base level through to the higher functions and back again – forming a loop from the higher levels in the cortex (memory and prediction) back to the lower. Hawkins seems to suggest that intelligence can be reduced to the cognitive ability of memory and the associated ability to make predictions about the world using incoming data. Once the basic data from our senses arrive at the highest level of our cortex, memory and the predictions drawn from those memories drive us the rest of the way. In his book Hawkins makes this statement: “intelligence is the capacity of the brain to predict the future by analogy to the past.” As a layperson, this definition of intelligence seems clear to me. It just has that ‘look and feel.’ Thoughts?
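To make that loop concrete for myself, here’s a toy sketch of a single “remember, predict, compare” cycle. This is my own illustration, not anything from the book, and all the names (ToyCortexLayer, observe, predict) are made up:

```python
# A toy "memory-prediction" loop: remember what follows what, predict the next
# input, and flag a surprise only when the prediction fails. Purely illustrative.

from collections import defaultdict

class ToyCortexLayer:
    def __init__(self):
        # memory: how often each input has followed a given input
        self.memory = defaultdict(lambda: defaultdict(int))
        self.last_input = None

    def predict(self, context):
        followers = self.memory[context]
        if not followers:
            return None
        return max(followers, key=followers.get)  # most frequently seen successor

    def observe(self, current):
        surprise = None
        if self.last_input is not None:
            predicted = self.predict(self.last_input)
            if predicted is not None and predicted != current:
                surprise = (predicted, current)   # prediction failed: pay attention
            # learn: strengthen the association last_input -> current
            self.memory[self.last_input][current] += 1
        self.last_input = current
        return surprise

layer = ToyCortexLayer()
for token in ["door", "hallway", "bathroom", "door", "hallway", "kitchen"]:
    surprise = layer.observe(token)
    if surprise:
        print(f"expected {surprise[0]!r}, got {surprise[1]!r} - attention!")
```

The only point of the toy is the shape of the cycle: memory feeds a prediction, the prediction is checked against the next input, and only the mismatch gets promoted upward.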
A little side note: WIRED has given this book its 2005 Wired Rave Award. Here’s a little of what was written at the WIRED site regarding the book’s inspiration. Hawkins was at his desk when he asked a simple question - (the rest is at the link under ‘book’ in the side menu) –
I don’t think it’s that simple. Consider the concept of object permanence, for example. If you present a young infant with a rattle, then repeatedly hide it away and bring it back, the infant initially has no idea that he/she is seeing the very same rattle each time. At some point in his/her development though, the infant begins to understand that he/she is only seeing one rattle, not multiple consecutive instances thereof.
This is not quite the same as “predicting the future by analogy to the past.” After all, the infant has no previous concept of object permanence from which to draw. Why should the infant think, “Oh, it’s the same rattle!” as opposed to “Oh, look! Another rattle, just like the one I saw last time”?
I was going to mention that as well. It seems to me that creativity is a key concept in human intelligence. Drawing analogies to past circumstances is an inadequate way to explain creativity, IMO.
It’s an interesting book. The trigger that led to Hawkins’s epiphany is Vernon Mountcastle’s 1978 paper Organizing Principle of Cerebral Function, which suggested that since the biological substrate of the cortex is grossly homogeneous throughout, the cortex must deploy the same generic algorithm to deal with all data (audio, video…) and motor output. The theory is elegant and sounds somewhat plausible to me.
Creativity also works via analogies. That’s what Fauconnier and Turner’s Network Model of Conceptual Integration explains. I’ve put up a list of links to papers online, at Wikipedia.
Where I find Hawkins’s hypothesis lacking is in dealing with the emergence of distinct modalities and emotions. In the book, he brushes off emotions as a product of the limbic system, even though that’s simplistic and some neuroscientists specialising in emotions, like Joseph LeDoux, believe that the limbic system doesn’t exist.
I don’t remember much else now. I read the book at least 5-6 months ago. I’m probably rusty on the nuances.
While Hawkins discusses creativity in a later chapter (I haven’t gotten there yet), I add this at real risk of misrepresenting him. Since you’ve read the entire book, let me know where I miss the mark.
As I understand Hawkins at this point, pattern recognition, that is memory, is done by analogy. Certainly there is some merit to this idea. For example, I never look at the same face twice in the same way. The angle is always different. The face is always a little older or different in some other way. Yet, I have no trouble recognizing that face as my sweet, god-fearing mother’s. I also have no problem recognizing a face as a face even though I have never met the possessor of that face before – i.e. faces have eyes, a nose, a mouth, etc. My brain holds an invariant form, a pattern, for the idea of ‘face.’ Not just my mother’s face but faces in general. Faces of dolls, for example, or of dogs. All faces are recognized as such by reference to this. Therefore, the more patterns I’m exposed to that are called faces, the more memory I have to draw on in that area. Yet my experience doesn’t restrict analogy. Analogy is what prediction is all about. That’s why I can predict the location of the bathroom in restaurants I’ve never been to before. Now, if there is something unusual about a face, for example an eye where the nose ought to be, my attention will focus there. My prediction about faces hasn’t been met, and my attention is drawn. My recognition of this face – the one with the eye where the nose should be – is a creative act.
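To illustrate what I mean by an invariant form (a sketch of my own, not Hawkins’ model; the feature lists and relations are made up): a ‘face’ can be recognized from the arrangement of its parts rather than from any exact image, and the odd arrangement is exactly what grabs attention:

```python
# Rough sketch of "invariant form" recognition: a face is judged by the relative
# arrangement of features, not exact positions or sizes. The feature dicts below
# are invented for illustration; y grows downward.

def looks_like_face(features):
    """features: dict mapping part name -> (x, y) position, any scale or offset."""
    required = {"left_eye", "right_eye", "nose", "mouth"}
    if not required <= features.keys():
        return False, "missing parts"
    eyes_y = (features["left_eye"][1] + features["right_eye"][1]) / 2
    # invariant relations: eyes above nose, nose above mouth
    if not (eyes_y < features["nose"][1] < features["mouth"][1]):
        return False, "parts out of place - attention drawn here"
    return True, "matches the stored 'face' pattern"

mom_today = {"left_eye": (2, 1), "right_eye": (4, 1), "nose": (3, 2), "mouth": (3, 3)}
odd_face  = {"left_eye": (2, 1), "right_eye": (4, 1), "nose": (3, 0), "mouth": (3, 3)}

print(looks_like_face(mom_today))  # (True, ...) despite new angle or age
print(looks_like_face(odd_face))   # (False, ...) the eye-where-the-nose-should-be case
```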
Creativity, input from the world, and prediction based on analogy from that input aren’t different animals. We don’t have to have had an experience to make predictions about it. As Hawkins states, “creativity can be defined simply as making predictions by analogy.” We call a product creative when, according to Hawkins, “…our memory-prediction system operates at a higher level of abstraction, when it makes uncommon predictions, using uncommon analogies.”
A note from the world of… creativity(?) When I was wondering what was truly new, I was trying to think of any invention that was completely without previous analogy. Then I thought of the story of the discovery of the benzene ring… a ring of atoms sharing bonds in a previously completely unconceived manner. It was discovered through analogy, in a dream about a snake with its tail in its mouth.
Then I thought of relativity. And the obvious analogy to a man sitting on a swivel chair and spinning round.
Then computation… boats… writing.
Writing may be a completely new invention. Not sure.
Actually, it sounds like a perfect description of it to me.
Don’t let your metaphysics get in the way of a phenomenal account when the phenomenal account is apparently sufficient.
I don’t believe the infant thinks either, actually, without the concept of “rattle” available in the first place which is surely the result of repeated stimuli. “Object permanence” is surely developed by repeated stimuli. Short of someone etching the concept on a blank slate, how else do you account for it? I don’t think it is fair to say that “object permanence” is somehow a unique concept in its own right rather than some sort of abstraction from primary sensation and memory. Do photons or electrons have “object permanence” when they are indistinguishable and “another electron just like the one I measured last time” is not distinct from “the same electron”?
The premise of the book seems like a restatement of Hume’s philosophy of the mind to me. One senses the notion here of ideas built up by impressions and memory.
But I’m also not clear how the book is suggesting this isn’t a computational task, by which I understand to mean “one incapable of being done by a computer” rather than “one we can’t currently emulate by computation.”
Again, I may be misrepresenting Hawkins here but I’ll take a stab at this —
If I understand your question — here’s what Hawkins says when providing a summary near the beginning of his book: “The ultimate defensive argument of AI is that computers could, in theory, simulate the entire brain. A computer could model all the neurons and their connections, and if it did there would be nothing to distinguish the “intelligence” of the brain from the “intelligence” of the computer simulation. …. But AI researchers don’t simulate brains, and their programs are not intelligent.”
And here’s what I interpret Hawkins to mean. First, I think Hawkins would say that it’s not behavior or output that defines intelligence, which, it seems, is how we judge AI but not necessarily people. For example, while reading we’re typically not “producing” much output, but certainly there is quite a bit going on which we could agree is called intelligent. Not so for computers. A computer program built on a structure of 0’s and 1’s can provide output that looks a lot like intelligence when viewed from the outside. Hawkins talks about the Turing Test and programs whose responses were able to fool people into thinking they were communicating with another person rather than with a computer. Hawkins uses this example to help explain what he means when saying that output alone doesn’t determine intelligence.

Hawkins cites John Searle, a philosopher, who created the thought experiment called the Chinese Room. In that experiment we have two rooms, one person in each room, and only one Chinese speaker. The person who speaks and writes Chinese slips a question written in Chinese characters through a slit in the wall between the rooms. In the other room sits the English-speaking-only individual, along with a thick book of instructions describing which character should be used to respond to each character sent through the slot. For example, if Chinese character ‘A’ is received then respond with character ‘B’, or if the context changes and the ‘A’ is immediately preceded by a ‘B’ then respond with ‘C’. These instructions are followed and the results are meaningful responses to the Chinese questions asked or statements made. The Chinese speaker on the other side, who receives these Chinese responses, must feel he or she is communicating with an intelligent, Chinese-understanding person on the other side of the wall. Of course they would be wrong – the person on the other side of the wall has no idea what was being asked or said in the notes passed to him, nor is he aware of what was passed back, or that it had meaning for those on the other side of the wall. In no sense is there “intelligence” going on in the way it is described when the word is attached to humans. Yet, this is the way computers ‘model the world.’ Programs are not “intelligent” in that sense.
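If it helps, the rule book can be pictured as nothing more than a lookup table. This is just my own crude sketch, not Searle’s or Hawkins’ formulation, and the entries are invented:

```python
# The Chinese Room as pure symbol lookup: the "rule book" maps incoming
# characters to replies with zero understanding on the part of whoever applies it.

RULE_BOOK = {
    ("A",): "B",        # if character 'A' is received, respond with 'B'
    ("B", "A"): "C",    # if that 'A' is immediately preceded by a 'B', respond with 'C'
}

def room_reply(symbols_received):
    """Look up the exact symbols seen; no meaning is attached to any of them."""
    return RULE_BOOK.get(tuple(symbols_received), "?")

# From outside the wall this looks like conversation; inside, it's bookkeeping.
print(room_reply(["A"]))       # -> B
print(room_reply(["B", "A"]))  # -> C
```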
Hawkins makes one other unconnected point later in his book that seems to me at least to expand somewhat on this idea – brains understand the world using analogy. Analogy using the memory already present. So, for example, Shakespeare’s metaphors are understandable to us when he writes – “Love is a smoke made with the fume of sighs.”
In the same way, IMO, Steven Pinker talks about computers and the limitations of computer understanding when he asks why computers have such a problem understanding the differences in the word “ring” when used in phrases like “Ring around the collar,” “Ring the bell,” “Ring the city,” “engagement ring,” “bathtub ring,” and on and on for other words and examples. The brain sees these differences and understands them almost immediately.
I don’t agree with Hawkins on this score. Ultimately, behaviour is all that matters, at least from a 3rd-party perspective. For all I know, the tree outside my building could be pondering metaphysics all its life. Fat chance it will engage in communication any day soon. Of course, behaviour need not be manifested immediately. One is not expected to master the essence of mechanics ten minutes after leaving the last class of the semester. But the only tangible manifestation of intelligence is behaviour.
Of interest might be this (short) book review, which also includes a bit on John Searle’s “Mind: An Introduction”.
It seems to me that there are a couple issues here. First, I’d think that memory and analogy are necessary but not sufficient for intelligence. What about emotions? What about reasoning?
Second, it seems to me that neither memory nor analogy are well understood enough to be able to rely on them as a definition for intelligence (analogy less so than memory). Pattern recognition, such as identifying a particular face as that face, despite changes in angle and such, is not analogy.
Interesting though. I look forward to reading this book at some point.
This book is in my queue. I keep saying it is next, but something else always jumps in. Now I’m even more interested, I’ll have to make sure it jumps to the front of the queue.
I think I’ll stick on the side of the Turing test over the Chinese Room for reasons elaborated in previous threads on the Chinese Room. While I won’t go so far as Gyan and say that behavior is all that matters from a third party perspective, that is definitely the direction in which my sympathies lie.
Judging from Hawkins’ statements, I doubt he sees emotion as a “necessary” attribute for the definition of intelligence. For example, truly “intelligent” machines would not need human-like qualities to be considered intelligent. Intelligence requires only a method for sensing the world, a memory of those sensations, and predictive abilities. Speaking for myself, the totality of human behavior, the sort that evolved to keep us alive and breeding, doesn’t necessarily equate with the term intelligence. Emotions like jealousy, greed, or envy might be beneficial when winning a mate or the next meal (ensuring both the survival of our genes and our bodies), but I don’t equate that behavior with intelligent behavior any more than I would when a tree sends sap to cover an opening in its bark. As Hawkins explains, intelligence isn’t …the emotional drives of the old brain – things like fear, paranoia, and desire. Why? Because we can imagine intelligent machines that …will not have these faculties. They will not have personal ambition. They will not desire wealth, social recognition, or sensual gratification… While these qualities might be of supreme importance in ensuring that our genes live on, they are not important to intelligence per se.
So — I can certainly understand how emotion, fear for example, is a pretty good primitive response when the animal in question may not reason that well. For example, a mouse with its amygdala removed will lose its fear of cats. The mouse will not flee from cats. (The amygdala is the part of the brain primarily responsible for creating the emotion of fear.) Contrast that with a person: with his amygdala removed he might lose his fear of tigers, but he will still be smart enough to stay away from tigers based on what he has learned about those potential interactions. (I’m assuming here.)
The emotion of fear can be described as a form of inflexible evolutionary intelligence – but so can any reflex. Plenty of animals use relatively simple – and seemingly intelligent – methods for locating food. A hydra, for example, has no brain but can move itself based on the location of food. An ‘intelligent’ act or a reflex? These sorts of reflexes don’t receive instruction from the brain (a hydra doesn’t have one), and we, even with our brains, have similar reflex actions, like removing our hand from a hot stove. (To save time and further injury, that response comes from our spinal cord.) So in regards to emotion, while it may help with survival, especially in animals without the quality of predictive abilities that humans seem to have, it doesn’t seem “essential” to intelligence in itself.
In response to your question about reason — what is reason if not making predictions based on experience and analogy?
First, I do think it is analogy to a small extent, and to greater extents as you drift away from what is recognized as a face (i.e. two eyes, one nose, a mouth, etc.). As discussed above with creativity, what’s recognised as creative is what we do all of the time; only when it’s done to lesser extents do we stop calling the result creative. For example, Hawkins might ask why looking at a problem in a different way, rearranging the problem, will call up different comparisons from memory – that is, different memories that have some similarity / analogy to the problem at hand. Yet if these recognitions are not by analogy, if not by comparison to some memory, what would they be, then?
That’s fine - I’m really not trying to win souls here, but I do find Hawkins’ idea compelling (based on what I presently know).
For others – this does call to mind a comparison Hawkins makes in the book: the comparison of Deep Blue and Garry Kasparov. Recall that Deep Blue ‘defeated’ Kasparov, but were Deep Blue’s actions in doing so “intelligent?” Note that Kasparov defeated Deep Blue once – and he defeated a machine that makes about 50 billion calculations of possible moves in three minutes. Considering the time allotted during tournament chess, Kasparov, being human and time constrained, couldn’t have had time to make these sorts of calculations (the brain isn’t fast enough). Clearly Kasparov had to have made certain predictions that allowed him to ‘never consider’ some moves and focus on others. That’s intelligent, not programmed, behavior. I just don’t see how Deep Blue was acting intelligently when it beat Kasparov. If it was acting in an intelligent way, why would Deep Blue consider even the most ridiculous moves before deciding on the ‘best’ move? In fact, I doubt Deep Blue had any concept of ‘ridiculous’ moves – or ‘average’ moves versus ‘better’ – or ‘better’ versus ‘best.’ Deep Blue was number crunching - by the billions - without any other idea of what was going on. An extremely fast abacus is not intelligent, as I understand the word.
On another point — I also want to add this about the brain and memory and how that seems to correspond with Hawkins’ memory / prediction model of intelligence:
As we all know, we are in possession of both short-term and long-term memory. But how do short-term and long-term memory fit into Hawkins’ model (as I presently understand it)? Short-term memory, which lasts for about 20 seconds or so, is the memory that is compared against ‘prediction’ (e.g. I casually notice the top of someone’s head and I unconsciously predict the unseen face and body below). Hawkins says that we only pay attention if there is something that doesn’t fit the prediction – that is, the top of the head pops up and there’s no face. Long-term memories are typically short-term memories repeated (whether by stimuli inside or outside the brain), so an experience like that would be converted to long-term memory because of the constant recall we would have of that harrowing moment. The brain doesn’t distinguish between internal and external stimuli, as I understand it. So repeated memories will create a new model from which to ‘predict’ the world. For example, I will focus on the unusual, and even replay those events in my mind, and exclude the mundane, the mundane already being modeled well enough to make accurate predictions. All this makes sense in Hawkins’ memory-prediction model of intelligence. Predictions always, but predictions based in past experience (memory).
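Here’s one way to picture that split, as a toy sketch of my own (the 20-second window is the figure I mentioned above; the promotion threshold is invented): items fade from a short-term buffer unless repeated recall, internal replay or external repetition, pushes them into long-term storage:

```python
# Toy illustration of short-term vs. long-term memory: an item "sticks" only
# after it has been recalled enough times within the short-term window.

import time

SHORT_TERM_SECONDS = 20    # rough figure quoted above
PROMOTION_THRESHOLD = 3    # made-up number of recalls needed to stick

class ToyMemory:
    def __init__(self):
        self.short_term = {}   # item -> (last_seen_timestamp, recall_count)
        self.long_term = set()

    def notice(self, item, now=None):
        now = now if now is not None else time.time()
        # drop anything that has sat unrecalled past the short-term window
        self.short_term = {k: v for k, v in self.short_term.items()
                           if now - v[0] <= SHORT_TERM_SECONDS}
        _ts, count = self.short_term.get(item, (now, 0))
        count += 1
        if count >= PROMOTION_THRESHOLD:
            self.long_term.add(item)           # repeated recall -> long-term memory
        self.short_term[item] = (now, count)   # recall also refreshes the window

mem = ToyMemory()
for _ in range(3):
    mem.notice("head with no face")            # the unusual gets replayed...
mem.notice("ordinary hallway")                 # ...the mundane is seen once and fades
print(mem.long_term)                           # {'head with no face'}
```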
Yes, it’s apparent that that is Hawkins’ view. Having not read the book, I can’t really comment on it; perhaps he’s really got something to say. And I agree that it’s unnecessary to equate “human-like” qualities with intelligence (although that raises the question of a general definition of intelligence; it seems to me that, as ambiguous and unsatisfying as his “test” is, Turing got it right by making it a functional, subjective definition). On the other hand, if, as you say, intelligence requires only “a method for sensing the world, a memory of those sensations, and predictive abilities”, then why are current machines (e.g., a thermostat at the extreme) not intelligent?
As to emotions, I need you to supply a better definition; certainly, there is more to it than fear, ambition, paranoia, etc., which are pretty complex. Let’s put it this way – there needs to be some method for deciding what is important (surely a prerequisite for considering something intelligent); for example, in AI, you run into the frame problem. In my mind, any mechanism that can be used to assign importance to actions or objects – even something as basic as pleasure/pain – is a part of what many people (not me, necessarily) would call “emotion”. To your example of “fear” – certainly, you’d concede that fear is not limited to reflex actions (e.g., trepidation, foreboding, etc). Now, perhaps you might claim that “higher” levels of fear are nothing but analogy, but I don’t think that’s what is meant here. Perhaps I’m wrong and the view of analogy that is being proposed really is that all-inclusive. But, I’d point out that, at some level of abstraction, terms become meaningless. I might as well go up another level and say that intelligence is nothing more than interactions in space/time. (“But that’s just dumb”, I hear you say. Yes, it is, and it is exactly my point. At that level, it tells you nothing about intelligence.)
Logical inference. In my mind, referring to a syllogism (much less induction or abduction) as “analogy” is misguided.
IMO, you abuse the term “analogy”. “Comparisons of memory” (that is, pattern recognition) is not equivalent to “analogy”. Given two 5x5 grids of black and white dots, where only one dot is a different color between the two, saying that an algorithm that can equate the two is performing an analogy is overreaching. Look, my only objection here is to equating “pattern recognition” and “analogy”. Again, I agree that analogy and memory are prerequisites for anything we’d call “intelligence”. As with the frame problem, it’s a matter of including a mechanism that decides how two (very) different things are similar that is the problem. Lumping everything in under the umbrella term “analogy” doesn’t really say much.
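To put a point on it, here is roughly what I mean by the 5x5 grid case (a throwaway sketch of my own, not anyone’s actual algorithm): the “equating” is just counting mismatched cells, with no structure-mapping involved.

```python
# Two 5x5 grids of black (B) and white (W) dots, differing in exactly one cell.
# "Equating" them is a mismatch count; calling that an analogy is a stretch.

grid_a = [
    "BWBWB",
    "WBWBW",
    "BWBWB",
    "WBWBW",
    "BWBWB",
]
grid_b = list(grid_a)
grid_b[2] = "BWWWB"   # flip the center cell: one dot differs

differences = sum(
    1
    for row_a, row_b in zip(grid_a, grid_b)
    for cell_a, cell_b in zip(row_a, row_b)
    if cell_a != cell_b
)

# declare the grids "the same pattern" if nearly all cells agree
print(differences, "differing cell(s); same pattern?", differences <= 1)
```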
And once again, I feel the need to reiterate – I’m sympathetic to and interested in his views. I’ll read this at some point. For another take on analogy, I’d suggest you check out Douglas Hofstadter’s work on Copycat and his book on the same topic.
A slight nitpick – that’s what heuristics and search pruning are all about. A (good) chess program does act in an intelligent way, limited by the representation it uses.
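For anyone curious what that looks like in miniature, here is a generic sketch of minimax with alpha-beta pruning (my own toy example, not Deep Blue’s actual code). The pruning step is what lets a program skip whole branches rather than grinding through every “ridiculous” line to the same depth:

```python
# Minimax with alpha-beta pruning: a branch is abandoned as soon as it provably
# cannot affect the final choice, and a heuristic evaluation stands in for
# searching all the way to the end of the game.

def alphabeta(state, depth, alpha, beta, maximizing, children, evaluate):
    """children(state) -> successor states; evaluate(state) -> heuristic score."""
    moves = children(state)
    if depth == 0 or not moves:
        return evaluate(state)
    if maximizing:
        best = float("-inf")
        for child in moves:
            best = max(best, alphabeta(child, depth - 1, alpha, beta, False, children, evaluate))
            alpha = max(alpha, best)
            if alpha >= beta:
                break                  # prune: this branch can no longer matter
        return best
    else:
        best = float("inf")
        for child in moves:
            best = min(best, alphabeta(child, depth - 1, alpha, beta, True, children, evaluate))
            beta = min(beta, best)
            if alpha >= beta:
                break                  # prune
        return best

# A throwaway toy "game" where each state is a number and a move adds or subtracts 1:
score = alphabeta(0, 3, float("-inf"), float("inf"), True,
                  children=lambda s: [s + 1, s - 1],
                  evaluate=lambda s: s)
print(score)   # with best play by both sides, the toy game nets out to 1
```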
Don’t get me wrong, I find it compelling, too. I just find no solace in Searle. To me he seems like a man who knows what conclusions he wants to reach before he has derived them, at least in the case of the Chinese Room.
Actually, it had to have had an “idea” of a “best move”, as you can tell from the fact that it made one. But I think you are too hung up on the semantic baggage “intelligence” carries with it. No, Deep Blue would not pass a Turing Test, so those who prefer that formulation of intelligence wouldn’t necessarily call it an intelligent device. But for that matter, a dolphin, whale, cat, dog, or elephant wouldn’t pass a Turing Test, either, nor could they play chess. Let’s not get too bogged down in computation-as-number-crunching.
I think this is highly contextual. A machine which anticipates what will be expected of it and responds accordingly will begin to exhibit what I understand to be intelligence. But here, of course, you see why I like the notion of analogy, etc, in the activities of an intelligent mind—I already think this is the proper explanation anyway. Probably because Hawkins’ ideas, and people who have similar notions, have found their way to me, rather than anything genuinely new on my part. But then, if he’s right, it should go without saying that it won’t be genuinely new… DigitalStimulus:
I think you presume an awful lot about how pattern recognition takes place. To my knowledge, the mechanism, as much as we understand it, is not as analytical as you present it here.
Sorry about that – I was mixing up my internal thoughts with the topic of the thread; I wrote much of that post from an AI point of view and had to go back and remove a bunch of stuff. On reading the above point, it became obvious to me that I missed that bit. Human processing is in no way as clear-cut as what I put forth. Thank you for pointing out my mistake.
However, I do still stand by my objection to equating “pattern matching” and “analogy”, in addition to my overarching objection of reducing “intelligence” to “analogy and memory”.
Ugh. You know what? Looking at the thread title again, I realize I made another mistake…it’s not “analogy and memory”, it’s “prediction and memory”. Dadgummit, I hate it when that happens.
At any rate, that just shifts my objection from the underspecified “analogy” to “reasoning”, as I mentioned earlier. I find it difficult to accept that syllogisms and the like are predictions. In fact, can doing a mathematical proof (or any other type of math, for that matter) really be considered making a prediction?