Having just re-read my response to Tigers2B1, I think I owe you an apology. Wow, that came off as aggressive. Sorry about that.
The analogy issue is what enables prediction. We usually recognize that individual instants in time (think of a photograph) are unique. There is little sense in matching patterns if we are not presented with patterns as such. When we loosen the requirements for “sameness” we leave the realm of analytical “pattern matching” and enter the realm of… analogy. I don’t believe it would be simple to flesh out a bright-line test for when something is similar enough to count as a pattern match rather than an analogous construction, but one wonders whether the processes involved in recognizing an object one has no prior knowledge about as “a book” are not based on something we would consider, in everyday terms, an analogy rather than “pattern matching.” I think the insight we need here was subtly presented by II Gyan II in post #6 above. But for an intuitive approach, think about what happens when you recognize something; more specifically, note the distinct lack of “characteristic features” that an analytical approach would seem to demand: rather than picking up hallmarks and piecing them together, it is as if the entire truth of the matter presents itself “all at once.”
I may be missing the mark here, my own biases in accounting for experience may be showing, but that’s what I’m picking up from the thread so far…
As already noted in my posts above – intelligence can be defined to include the behavior of a hydra – or even the cells in our bodies if we wanted to go that far – it all depends on how inclusive you want to be. A hydra makes ‘predictions’ about its environment when ‘searching’ for food. All done without a brain. I’m sure there are machines that can perform similar feats. My heart is beating right now – and I hope it still is as you read this – if not, a machine can certainly be employed to do the job - for a time. But are these truly intelligent behaviors, or merely automatic or controlled behaviors?
Specifically to your example – I’m not certain where you’re going with the example of a thermostat (I tried to anticipate above) – but a thermostat measures – it doesn’t predict. It measures without the memory – a memory which is called back to form predictions. Predictions that, in turn, have the potential to flex with incoming data. That is – make brand new predictions. A loop. Predictions with a purpose that determine our behavior - our world view – with our world modifying, always, our predictions.
I’m not referring to logic when I refer to analogy - that is, prediction by analogy. And I’m not referring to logic when I referred to your use of the term “reason” in the post above. I may be in error here – but as I understand this, syllogisms, while logical, don’t have to be reasonable. Reasonableness is what corresponds to our experiences. So the syllogism - All humans can fly, Socrates is human, therefore Socrates can fly – isn’t “reasonable” because the major premise isn’t reasonable — that is, it’s contrary to experience. And experience is what drives prediction and memory. So we’re back to prediction and memory it seems to me - if stuff is to make sense. In the example of Socrates – IF the major premise were determined “reasonable” – and in this case that would be done by observation (sensation) and memory – then the rest, which logically follows, would be reasonable also – Prediction based on analogy, while it might be thought of as a “reasonable” response to the world - is not the same as a logical syllogism IMO - Of course, I could just be getting myself all confused here.
AND
Is there an intelligent designer for our universe? Maybe – but not necessarily.
IMO intelligence isn’t necessarily determined by outcome.
Considering the ‘Chinese room’ thought experiment mentioned above – it appears, to me at least, that one can respond with meaning – and still have no idea of what’s going on ---- So – I suppose you understand why I disagree with the idea that Deep Blue “had to have an idea of the best move” or even “anticipated” anything - as stated –
OK, so I think there’s some miscommunication here. I think the issue is that Hawkins is putting forth an idea of what intelligence is. I’m saying, look – if that’s how you define it, then you have to include this and that. My point being, memory and prediction alone are too simple.
As to the above quote: First, the thermostat was purely an extreme example, simply to refute the statement “Intelligence requires only a method for sensing the world, a memory of those sensations, and predictive abilities” (bolding mine). It’s more than that. Second, I don’t think one can reasonably define intelligence to include hydras or thermostats. Furthermore, it borders on trivial to design a machine that uses prediction. Almost anything that makes use of a Kalman filter (a missile guidance system, for instance) uses prediction. It has a motion model and an observation model, and uses probability calculations to predict location, updating errors along the way. An engineer could add such a mechanism to a thermostat, although it would be overkill. Nonetheless, the thermostat would have a prediction component. It wouldn’t be intelligent.
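If it helps make that concrete, here is a bare-bones Python sketch of the predict/update cycle such a device might run. Every model and noise value in it is a number I made up for illustration; it is the shape of the machinery that matters, not the details.
[code]
# Minimal 1-D Kalman filter sketch: a "predicting thermostat".
# All models and noise values below are made-up illustrative numbers,
# not taken from any real device.

def kalman_step(x_est, p_est, z, a=1.0, h=1.0, q=0.01, r=0.5):
    """One predict/update cycle.
    x_est, p_est : previous state estimate and its variance
    z            : new (noisy) temperature measurement
    a, h         : motion model and observation model (scalars here)
    q, r         : process and measurement noise variances
    """
    # Predict: where do we expect the temperature to be next?
    x_pred = a * x_est
    p_pred = a * p_est * a + q

    # Update: correct the prediction with the new measurement.
    k = p_pred * h / (h * p_pred * h + r)      # Kalman gain
    x_new = x_pred + k * (z - h * x_pred)      # blend prediction and measurement
    p_new = (1 - k * h) * p_pred
    return x_new, p_new

# Toy usage: feed in a few noisy readings around 21 degrees C.
x, p = 20.0, 1.0
for z in [20.4, 21.1, 20.9, 21.3, 21.0]:
    x, p = kalman_step(x, p, z)
    print(f"measurement={z:4.1f}  estimate={x:5.2f}  variance={p:5.3f}")
[/code]
Prediction, memory of past estimates, and correction from error are all in there, and there is still nothing anyone would call intelligence.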
Again, I sense a miscommunication. I was responding to your question, “what is reason if not making predictions based on experience and analogy?” My answer is – logical inference. I’m not really sure what “reason” could possibly be (note that I’d not conflate “reason” with “reasonableness”) without logic.
It’s possible the author’s statement is correct, although I have two diverging thoughts:
- It’s even simpler than that, intelligence is merely input/operation/output (see the last portion of my post)
- Our level of intelligence seems to require functions that, when described abstractly, go beyond those terms (even if underneath it all they are built on those basic foundations)
Here are some of those other functions:
Association (as part of memory)
Abstraction (as part of prediction, which, to me is not obviously included)
Creativity (as part of prediction, possibly this is merely analogy at a high level of abstraction, or it has a randomness to it, who knows)
Motivation (can’t solve a problem you don’t care about)
Other random thoughts:
Describing Our Brain in Simple Terms
I think the danger in using these “simple” explanations is that we attempt to use our words to compartmentalize various functions in the brain that are not necessarily distinct or able to be compartmentalized. A neural network is great at finding local minima, that’s what it does, and our brain does a whole lot of it. Sure, we need to use our words to better understand the brain, but we should be careful of an explanation that is too simple.
Chinese Room
If you write a program that is as un-intelligent and un-like our brains as the Chinese Room thought experiment, then yes, that program should not be considered intelligent. But the model is flawed (where is the internal motivation? where is the ability to solve problems to fulfill the internal goals? etc.), so I don’t think it proves anything.
Blue Gene
Not intelligent (at least not by our standards), just programming.
Emotions
It seems like artificial intelligence will require some form of these for motivation, etc.
Steven Pinker and Computer’s Problem with “ring”
Once again, if you design a crappy program that is not intelligent, then all you have is a crappy program that is not intelligent; you haven’t proven that computers can’t be intelligent. The “ring” issue he spoke about is all about context, and to be honest, this type of problem is exactly where neural networks shine (producing appropriate output based on processing all of the current input simultaneously, unlike traditional algorithms), so I don’t think this one holds much water (possibly it did before the power of neural networks was discovered back in the 1950s).
Algorithm
I think this term is a poor term for describing what our brain does. It implies procedural steps like our current programming languages, but neural networks do not operate this way.
One Way I Like to Think About Intelligence
I picture a creature with only a few brain cells attached to input (senses) and output (muscles/motors/movement) fighting to stay alive. If the combination of those brain cells helps this thing stay alive, then we have intelligence. Even if we randomly configured it, and half of the reason it stayed alive was due to the particular circumstances that surrounded it, then we still have intelligence, just that it’s optimized to that particular environment.
Now add more brain cells and continue the process:
Maybe with 3 brain cells all it could do was detect food so it knows where to go (“knows” means that’s how the brain cells respond to that situation, nothing more).
With 30 brain cells, the ones that stay alive are the ones whose brain cells make them move away from danger.
With each increase in cells and connections we get more functionality, including, eventually, modelling the world around it, then prediction, etc. etc. etc.
My point is that our self-awareness could easily be just another one of these advanced functions, in which case I don’t see any reason why a computer couldn’t also become self-aware.
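For what it’s worth, here is a deliberately crude Python toy of that picture: a handful of randomly wired “brain cells” between a sense and a motor, with survival as the only test. Everything in it (the wiring, the environment, the numbers) is invented just to show the loop I have in mind.
[code]
import random

# Toy "creature" with a handful of randomly wired brain cells.
# Input: the direction of the nearest food (-1 left, +1 right).
# Output: which way the creature moves. This is not a claim about
# real neurons; it is only the bare input -> cells -> output loop.

def make_brain(n_cells=3):
    # Each "cell" is just a random weight from sense to motor.
    return [random.uniform(-1, 1) for _ in range(n_cells)]

def act(brain, food_direction):
    drive = sum(w * food_direction for w in brain)
    return 1 if drive > 0 else -1   # move right or left

def survives(brain, steps=20):
    pos, food = 0, 5                # creature starts to the left of the food
    for _ in range(steps):
        direction = 1 if food > pos else -1
        pos += act(brain, direction)
        if pos == food:             # reached food in time: it "knew" what to do
            return True
    return False

# Random brains: some wirings happen to chase food, some do not.
brains = [make_brain() for _ in range(10)]
survivors = [b for b in brains if survives(b)]
print(f"{len(survivors)} of {len(brains)} random brains 'knew' to chase food")
[/code]
Add more cells and a harsher world, and in principle that same loop is where modelling the world and prediction would have to come from.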
Here’s an old thread on the Chinese Room and the strong AI position. Good read.
I’ll check it out, thanks.
Boy am I sorry that my life right now is so busy that I can only briefly contribute to this thread. Several small points …
Many of you may remember Douglas Hofstadter of Gödel, Escher, Bach fame. He went on to work specifically on computer models of analogies in the Fluid Analogies Research Group. His book, Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought, published in 1995, begins with the understanding that pattern finding is the core of intelligence. Unlike Hawkins, he does not treat this as some kind of brilliant epiphany. Instead he goes on to computer-model analogy making in very specific microdomains in order to better understand the means by which we do so. An interesting read.
Combine this concept, if you will, with the thoughts of Peter Gärdenfors in Conceptual Spaces: The Geometry of Thought (2000), which describes ideas as quantifiable multidimensional conceptual spaces. Think, as an example of a conceptual space, of the color spindle, with the color wheel in the plane and white and black as the top and bottom of a spindle emerging above and below. All concepts can be described in n-dimensional geometric terms, with varying degrees of tightness of fit required or not required in various dimensions to allow for concreteness/specificity vs. abstractness/generality along any individual dimension. These dimensions are not limited to those of lower-order perceptual input (like color), but include dimensions learned by experience, such as functional or social significance.
When you put these together you discover that creativity is the process of geometric manipulations of conceptual objects: transformations; rotations; translations; etc. All of which is computationally reproducible once the metrics are defined. (Just use the math of geometric transformation of n-dimensional spaces.) To put that into simpler English, creative ideas are taking one concept and translating it, by way of analogy, to a novel domain, enlarging or reducing it, rotating it, stretching it if needed, and finding an unexpected good fit. The rolling log is placed under the sledge and becomes a wheel which is applied to the domain of thoughts about the heavens and wheels within wheels are created which are transformed into ellipses and the solar system. That concept is translated into the domain of determining atomic structure and the concept of electrons orbiting a nucleus is created, and later transformed itself and modified. And so on.
Analogies are geometric transformations in the service of solving novel problems in pursuit of salient goals. And that is what intelligence is all about.
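If you want to see the bare skeleton of that in code, here is a toy Python sketch that treats concepts as points in a small feature space and applies the geometric moves (translation, scaling, rotation) to look for an unexpected fit in another region. The dimensions, coordinates, and similarity measure are my own placeholder choices, not Gärdenfors’s actual formalism.
[code]
import numpy as np

# Toy "conceptual space": each concept is a point in a small feature
# space. The dimensions and coordinates are invented for illustration;
# Gardenfors's real proposal is far richer than this.
#   dims: [roundness, size, motion]
concepts = {
    "rolling_log": np.array([0.9, 0.6, 0.8]),
    "wheel":       np.array([1.0, 0.5, 0.9]),
    "planet":      np.array([1.0, 0.9, 0.7]),
}

def translate(v, offset):
    return v + offset              # move the concept to a new region of the space

def scale(v, factor):
    return v * factor              # enlarge or shrink it

def rotate(v, theta, i=0, j=1):
    # Rotate the concept in the plane of dimensions i and j.
    r = v.astype(float).copy()
    c, s = np.cos(theta), np.sin(theta)
    r[i], r[j] = c * v[i] - s * v[j], s * v[i] + c * v[j]
    return r

def similarity(a, b):
    return 1.0 / (1.0 + np.linalg.norm(a - b))   # closer points = more similar

# "Analogy" as a transformation: nudge the rolling log and see whether
# the result lands near an existing concept (scale and rotate are the
# other moves available).
candidate = translate(concepts["rolling_log"], np.array([0.1, -0.1, 0.1]))
for name, vec in concepts.items():
    print(f"{name:12s} similarity to transformed log: {similarity(candidate, vec):.2f}")
[/code]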
And if anyone wants some serious reading, the best modeling of brain function and learning is done by Steve Grossberg at BU. For an overview, look specifically at: Grossberg, S. (1999). The link between brain learning, attention, and consciousness. Consciousness and Cognition, 8, 1-44. A preliminary version appears as Boston University Technical Report CAS/CNS-TR-97-018, available on that site in PDF (Gro.concog98.pdf, 336Kb) and postscript (Gro.concog98.ps.gz, 761Kb).
I tend to agree with these ideas.
And I like your point about not treating it like a brilliant epiphany. When I read the blurb about Hawkins my initial reaction was, “OK, yes, I’ve come to similar conclusions, namely that a key part of our intelligence is modeling the world around us inside our heads, but I didn’t think this was news to anyone.” Then I thought maybe I was being a little critical.
I have given this stuff much thought because it has always interested me but it is even more relevant because of a current project.
[Tangent] (current project):
I always liked the idea of creating artificial intelligence (I know, so does half the world), but I finally decided to work on it recently after reading an article in Scientific American about recent developments in understanding the creation of short- and long-term memories.
So I set out to create a program with creatures in an environment, and they would get a neural network brain but with some differences from standard ones. Instead of a binary on/off neuron firing model, I wanted to model the more complex wave functions of the neuron, and also create a wave function for the activity at the synapse (inbound into the neuron). The reason was to potentially “discover” “brain” functions that performed better than, or differently from, the binary model.
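To give a flavor of what I mean, here is a stripped-down Python sketch of a “wave” neuron and synapse. The particular wave shapes, delays, and constants are placeholders I picked arbitrarily; the real model will be more involved.
[code]
import math

# Sketch of a neuron whose output is a continuous wave rather than a
# binary spike, with a synapse that shapes the incoming signal over time.
# Every wave shape and constant here is an arbitrary placeholder.

def synapse_wave(t, weight, delay=0.1, decay=2.0):
    # Post-synaptic response to an input that arrived at time 0.
    if t < delay:
        return 0.0
    return weight * math.exp(-decay * (t - delay))

def neuron_output(total_input, t, threshold=0.5, freq=3.0):
    # Above threshold, the neuron emits a damped oscillation whose
    # amplitude grows with how far the input exceeds the threshold.
    if total_input <= threshold:
        return 0.0
    amplitude = total_input - threshold
    return amplitude * math.sin(2 * math.pi * freq * t) * math.exp(-t)

# Toy moment in time: two synapses feed one neuron; sample its output wave.
for step in range(6):
    t = step * 0.1
    drive = synapse_wave(t, weight=0.8) + synapse_wave(t, weight=0.6)
    print(f"t={t:.1f}  drive={drive:.3f}  output={neuron_output(drive, t):.3f}")
[/code]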
While working on the design of the creature I realized the following:
- For creatures to learn anything they need a hostile environment so that learning pays off (keeps them alive); otherwise there is no advantage to learning. This, by the way, has raised a few questions for me, like: is this true? Would a more hostile environment evolve intelligence more rapidly? Would a more hostile environment create intelligence beyond our intelligence? Is our intelligence limited by our environment? (No need to be able to solve quadratic equations in your head, so, for the most part, we can’t.)
- Infusing the creature with a “will” to live is not trivial. Sure, I could program in some artificial control to make the creature stay alive, but I wanted the brain to do all the work and not be influenced by my programming in any way. My conclusion was that the “will” to live is either what you get when your brain cells are configured such that they avoid danger and go for food, or it is a by-product of higher-level intelligence. My only choice is to create a bunch of creatures; through offspring and genetic algorithms, the ones that survive will, by definition, have the “will” to live.
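Roughly, the evolutionary loop I have in mind looks like the sketch below (in Python, with an artificial stand-in for the survival test and made-up mutation numbers; the real version will run the creatures in the simulated world instead).
[code]
import random

# Bare-bones evolutionary loop: creatures with random "brain" weights,
# survivors breed, offspring get slightly mutated copies of a parent
# brain. Every number here is an arbitrary placeholder.

def random_brain(n=4):
    return [random.uniform(-1, 1) for _ in range(n)]

def fitness(brain):
    # Stand-in for the survival test: brains whose weights, on balance,
    # push the creature toward food score higher. The real run would
    # simulate the environment instead of using this shortcut.
    return sum(brain)

def mutate(brain, rate=0.1):
    return [w + random.gauss(0, rate) for w in brain]

population = [random_brain() for _ in range(20)]
for generation in range(10):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]        # by definition, the ones that "wanted" to live
    offspring = [mutate(random.choice(survivors)) for _ in range(10)]
    population = survivors + offspring
    print(f"gen {generation}: best surviving fitness {fitness(survivors[0]):.2f}")
[/code]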
Current status: Just got the physics of the world working so me and my kids can see this stuff in action on the screen, phase I of brain is ready to go (phase I=“learning” only through evolution, meaning baby creature begins with brain similar to parents configuration, phase II=incorporate learning algorithms during life span).
[/Tangent]
I respond here because that thread is so old, but I really get excited about this stuff and my kids sure don’t want to hear it.
Well I read through quite a bit of the thread and it did help clarify something:
Whether he knows it or not, Searle’s thought experiment really is just an argument against the Turing test, nothing more, nothing less. He talks about consciousness and understanding, and he is right: the man does not understand. So what? It doesn’t mean you couldn’t create something that did. But the model he describes is certainly not the model I would use (and am in the process of using) to create intelligence. Now don’t get me wrong, I don’t think I am going to magically succeed where others have not; my goals are small and doable.
Here is where I think he is missing something:
Understanding comes from associating input and output over time. We have senses that take in input, we get internal input from other sections of the brain, we produce output and then we get more input; all of this creates associations of input with output. We feel cold, so we move towards the most heat; we feel warmer, so we associate the movement with the sensation. We do this in billions of ways, and over time we create an internal model of the world which includes our own body/mind; we notice that these arms always go with us, that we can control them, etc. etc. etc. The man in the Chinese Room has none of this. Just a lookup table. No ability to make new associations, no ability to understand that the word tree means visually a certain shape, smells a certain way, feels various ways, hurts when you fall out of it, is great for fuel in the winter, will support a fort and a bunch of kids, sometimes gets caterpillars that make nests covering the leaves, sometimes gets hit by lightning, is heavy and can destroy houses when it falls, etc. etc. etc. etc.
Basically what he is saying is that a slide rule (lookup table) is not intelligent, and I would say back to him, “and you spent how much on your PhD to come up with that flawed analogy?”
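To sketch the kind of association I’m talking about (and which the man in the room completely lacks), here is a toy Python example where links between sensations, actions, and outcomes simply accumulate with experience. It is invented purely for illustration, not a claim about how the brain actually stores associations.
[code]
from collections import defaultdict

# Toy associative memory: strengthen a link between what was sensed,
# what was done, and whether things got better or worse. The point is
# only that "meaning" accretes from many linked experiences, which a
# fixed lookup table never does.

association = defaultdict(float)

def experience(sensation, action, outcome, strength=1.0):
    association[(sensation, action)] += strength * (1 if outcome == "better" else -1)

def choose_action(sensation, options):
    # Pick the action most strongly associated with things getting better.
    return max(options, key=lambda a: association[(sensation, a)])

# The agent stumbles around and learns "cold -> move toward heat".
for _ in range(5):
    experience("cold", "move_toward_heat", "better")
    experience("cold", "stay_put", "worse")
print(choose_action("cold", ["move_toward_heat", "stay_put"]))     # -> move_toward_heat

# A "tree" concept is just many such links piling up across the senses.
for feature in ["tall_shape_seen", "bark_smell", "rough_feel", "good_firewood"]:
    experience(feature, "call_it_a_tree", "better")
print(choose_action("bark_smell", ["call_it_a_tree", "ignore_it"]))  # -> call_it_a_tree
[/code]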
I think his argument is something more than that. In particular, I think it tries to refute the computational model of mind – that it is impossible for consciousness/intelligence/what-have-you to arise from symbol manipulation. He’s not the only one that feels this way; look at Fodor’s recent objections in “The Mind Doesn’t Work That Way” regarding abduction. (Note that, at least as I understand it, Fodor’s position isn’t against AI per se the way Searle’s is, it’s just arguing against a pure symbol manipulation system. I came across Pinker’s response to Fodor’s book at another time; one of his points is that Fodor misrepresents Pinker’s position, pigeonholing him as claiming pure symbol manipulation is enough. Here’s a link to the paper: So How Does the Mind Work? I hope it’s accessible to anyone who wants to read it; I was asked for my university proxy for download. I googled on pinker “so how does the mind work” and only got 4 pages of hits, so it’s not too hard to follow up on it.)
To head off any objections to that, let me say that I personally don’t agree with Searle. I’m not sure about agreement with Fodor, perhaps because I don’t understand his objections to the depth required to accept or reject them. I think that pure symbol manipulation (a la a Turing machine) is not enough to explain intelligence/consciousness. But then, I also think it is possible to use a Turing machine to “go beyond” simple symbol manipulation.
This is a slight hijack, but it coincides with my objections regarding logical inference and dovetails with what I think of as part of Fodor’s argument. I see it as akin to the argument regarding abiogenesis. I think it goes something like this: there is prima facie evidence that humans can perform symbol manipulation. From whence do these symbols arise? You imply that it is associations of input and output. And that works for reactive systems (“lower” animals), but doesn’t explain how we get symbols from the associations.
Note that I’m not saying it is impossible; quite the contrary, I believe that symbols (i.e., language) do arise from neural structures. (And yes, to some degree, one might say this contradicts what I said in an earlier post regarding the need for more than just memory and prediction. To which I’d respond, “Not really; I grant that memory, prediction, and analogy are prerequisites for intelligence, but that’s not enough of an explanation.”)
Let me put it this way – I don’t think that there’s a real (perhaps I should say comprehensive) theory yet as to how this occurs. There’s a big difference between being able to train a neural network that performs associative memory, particularly the limited stuff that has been done so far, and integrating it into a seamless whole that explains, end-to-end, how the mind works. We just don’t have the tools to even analyze neural networks yet, much less a really good understanding of their operation or the ability to put what we do know together into a complete package.
At any rate, your project sounds neat; I hope you keep us informed.
Nothing to add to the discussion, really; I just wanted to thank Digital Stimulus for succinctly clarifying one of the most vexing questions in cognitive science/philosophy of mind.
Most authors I’ve read either avoid that question like the plague, shrug their shoulders, or break into an arms-flailing tap dance. I’ve tried to tackle the question myself, and despite liberal infusions of both caffeine and oatmeal stout, have made no real progress whatsoever.
If anyone has information on work being done in this area, I’d love a reference.
Why, thank you! blush I hope that made up a little bit for my gaffes earlier in this thread.
A couple of months ago, I was party to a chat with Tom Ziemke. He mentioned that one of his graduate students was working on neural network analysis that might provide some answers (or at least a beginning) on extracting symbols from them. I haven’t followed up on it at all, but it sounded really interesting. If you look into it, I’d be really interested to hear what the current state of his research is.
Thanks for the link- it will take me a while to chew through Ziemke’s PDFs, but on first blush, it looks like he’s exploring some of the same territory as Andy Clark’s active externalism. Fascinating stuff.
FWIW, Searle’s point, as expressed in his recent Mind: A Brief Introduction, is that the debate between dualism and materialism is the wrong question. He puts it forth in a fourfold thesis:
- Conscious states with all their qualia are real phenomena; not just an illusion. “We cannot do an eliminative reduction of consciousness”, which is what he tries to illustrate in his handling of the Chinese Room.
- They are causally reducible to neurobiologic principles.
- They are brain system features, not features of individual neurons or synapses.
- They function causally in the real world.
He sees modern cognitive neuroscience as the approach to understanding the nature of consciousness and qualia but feels that we are a ways off from understanding it. I see Grossberg’s work as having a handle on it myself.
Thanks for posting that. Looks like I’ll be skipping that book. I find it odd that he feels it’s the “wrong question”. Personally, I don’t see how it can be avoided; I honestly believe it’s an either/or question. If you have an answer to it, it seems to me that it is part and parcel with answers to many related questions.
Now then, as to why I’ll skip the book:
Well, that first is an unsubstantiated assertion. Personally, I find the “philosophical zombie” argument to be silly (and, by extension, the need to posit qualia). As to the second, it’s like arguing with a proponent of intelligent design.
Nice way to beg the question, assuming that the inclusion of “biologic” dismisses computers.
I wonder if anyone actually makes this argument. It seems like a silly thing to say, but perhaps he relies on it for a different part of his general argument.
OK. I’m not entirely sure what this does except avoid the materialism/dualism question. To be causal, either conscious states are physically grounded or there is some “magic” that connects them to the physical body.
I admit that I’m no Searle expert; perhaps I misunderstand his position.
I too will be skipping his book.
Now this may sound dogmatic/extremist or egotistical, so I apologize in advance, but here it is:
To me, it is so obvious why the Chinese Room thought experiment is flawed that I would prefer to read other authors that have valuable insight into the problem. And I am going to read up on those other links posted previously, they look interesting.
If his point is that the Chinese Room experiment is not an example of intelligence, then we are in complete agreement. But it’s so trivial it means nothing.
Again, while he may be trying to refute the computational model of mind, he has not come anywhere close to doing that.
Of course a simple lookup table does not “understand” anything in the sense we use it. But then, as noted by others, the definition of “understand” is a critical part of this problem.
To me, this is what “understand” means in our current context:
The “intelligence” has a model of the world (relevant to its goals and context) such that new input is processed in such a way that the output generated moves the “intelligence” accurately towards its goals.
Note: This is by no means a perfect definition, just something intended to show generally what I think it means. In addition, words like “accurately” don’t need to be 0 or 1; they can be a continuum, because no creature has perfect understanding of anything.
I don’t think you can have intelligence without goals because the solution to any given problem requires a perspective.
I also think that consciousness/awareness are merely a by-product of incorporating one-self into your own mental model of the world. Nothing more. I know from evolution/hard-wiring that I want to generally avoid pain, that’s me in my own mental model.
Maybe a computer can’t be conscious/aware, I just haven’t heard any arguments that convince me. I also don’t think our awareness is magic; it’s certainly impressive to see everything our brain is able to do, but that doesn’t mean there is some undefined “thing” going on that creates this condition.
As for symbol processing, I think that just because we process symbols at SOME level, does not mean that is the foundation of our brains/thinking. It seems to me symbol processing is one of many functions that are built on the type of computation that neural networks perform. Therefore, taking that one thing and creating a model based on that does not seem at all applicable.
I’m glad I’m not the only one who feels that way. It sounds like we have pretty similar views.
To circle back to the OP, it seems to me that you raise another aspect that Hawkins leaves out – that of reflection (or self-modeling). That’s a rather huge thing to overlook; I don’t think it can be derived solely from memory and prediction. If it can’t, doesn’t Hawkins’ explanation need to be “memory, prediction, and reflection”?