The Blank Slate

For the past month, in between schoolwork and regular work, I’ve been working my way through Steven Pinker’s book The Blank Slate. Its thesis, broadly speaking, is laid out in the preface:

(Hopefully I retyped that without too many errors).

At any rate, I’m finding the book’s central thesis–that, as he later states, heritable factors account for roughly 40-50% of the variance in personality traits–to be extremely persuasive and somewhat shocking; in some ways, I suspect this book is going to change my understanding of the world about as much as Isaiah Berlin’s The Crooked Timber of Humanity did a decade ago.

At the same time, much as with Berlin’s book, I’m disturbed by some of the details. Pinker dismisses constructivism in mathematics education (p. 222) in a manner that betrays an unfamiliarity with the philosophy–or, to be scrupulous, an unfamiliarity with the approach as it was taught to me by an adherent. He talks about the “version of leftism known as political correctness” (p. 287), as if this were a political movement instead of an epithet.

And, most trivially but most tellingly, he refers to Public Enemy as a “gangsta rap group” (p. 329). Sure, this is an easy mistake to make for someone unfamiliar with rap. However, he’s talking about their song “911 Is a Joke,” and he’s using it as evidence for his claim that “Inner-city African Americans” have developed “a culture of honor,” that is, a culture in which small slights are punished with great violence. He claims that when there is no strong police body to which people can appeal when a wrong is done to them, such “cultures of honor” naturally arise as a means of self-protection. Calling Public Enemy gangsta rappers feeds into that theory.

The problem, of course, is that they’re not. Indeed, from my (admittedly limited) understanding of their work, they are politically active and politically militant. Their solution to the joke of 911 wasn’t to strap on weapons and kill motherfuckers who dissed them; their solution to the joke of 911 was to organize, publicize the injustices, and demand that the system be changed. In other words, their approach was to render any “culture of honor” obsolete.

I don’t know for sure that acknowledging Public Enemy’s true colors would have undermined his point; the idea behind the culture of honor seems reasonable to me, and draws on Hobbes and other fairly reputable philosophers. Still, I am bothered by the little inaccuracies in the book: it makes me wonder what else he got wrong, what he may have distorted in areas with which I’m less familiar.

At any rate, I’m almost done with the book (having just finished the chapter that concludes that parental upbringing, when not actively abusive, accounts for between 0 and 10% of the variance in adult personality), and I thought it’d be interesting to discuss his thesis on the boards, or to discuss the book if others have read it. I’ve read several reviews of it, almost all of which were glowing (there was one in The Guardian, I think, that was less enthusiastic, but not specific in its criticisms); if anyone knows of reviews that specifically and intelligently trash it, I’d like to read those as well.

Daniel

It’s always surprising to me that the 100% nurture (cultural/societal/whatever) position is even considered seriously. When I skimmed through Blank Slate years ago, it struck me as being so obvious – akin to the gradual vs. punctuated evolution arguments. The fact that Pinker even had to make the argument astounded me. It would be so nice if the real world was so simple that anything could be 100% one way or the other…

At any rate, my big problem with Pinker is (or was, possibly, as I’ve not kept up with his research) that he always seemed to take a hardline Chomskian position on the innateness of language (and I readily admit that I have only a shallow grasp of his actual position; perhaps I’m not doing it justice). I’d think that the same general stance – that is, that language formation is at best partially innate – is most accurate. And I always read him as dissing connectionist models, which is just misguided, IMHO.

At any rate, I think if you’re looking for critiques, you might want to start by googling “pinker fodor” – you’ll come up with a wealth of back and forth analysis, although I think it’s more to do with Pinker’s How the Mind Works and Fodor’s The Mind Doesn’t Work That Way. That should get you in deep (and quick), while covering some more recent work, albeit in a rather focussed area.

I believe the standard critique of Pinker is that he’s a bit of an unreformed Sapir-Whorfian. Whorf is the guy who claimed that the Inuit experience reality differently because they have so many words for snow. His premise and his conclusion have both been pretty discredited. The idea that language determines reality exists now only in the realm of “What the Bleep Do We Know?”-style pseudoscience. Surely language plays some part, but not in the deterministic way that Pinker seems to suggest (at least in his other books).

**Digital Stimulus**,  I think that there are many intellectual circles that still think of human nature as entirely malleable.  Certainly the more utopian anarchists and socialists believe that the structure of society is what creates greed and social problems, instead of human nature.  I guess you wouldn't call them mainstream, but I think their ideas still carry a lot of weight even in the mainstream left.

Really? I’m pretty sure you’re wrong on this; I believe I learned about Whorf from The Language Instinct, where Pinker roundly mocked him. I never got the impression that he thinks language determines reality. On the contrary, I believe his argument is precisely the opposite: the reality [of our genes] determines the structure of our language.

Digital Stimulus, the pointer to Fodor was great. Alas, I’m not sure I understand Fodor’s objection entirely:

What I wonder is, what else could account for our minds, if you accept natural selection? I can understand debating which traits of our personalities are innate (and therefore evolved, and therefore adaptive traits) and which traits are cultural (and may therefore not be adaptive for us personally); but if you believe that a trait is genetically determined, it’s gotta be adaptive. Unless you think the selfish gene model is literal–that the selfish gene is some sort of Lovecraftian elder god determining our destiny–it seems an unremarkable step from saying a trait is heritable to speculating on what adaptive purpose the trait serves.

Daniel

Wow, yeah, you’re right. I had it exactly backwards. Time to read “The Language Instinct” again. (I wonder who I was thinking of).

Yeah, it truly amazes me. As the frog found out, sometimes a scorpion does what it does just because it’s a scorpion.

Well, it’s very gratifying that you found the suggestion useful.

I have to put a disclaimer on this for a few reasons: first, Fodor’s book is skimpy and (IMHO) tries to do more work than it has the muscle to do, and second, I’m not well enough versed in philosophy of mind to give an accurate critique, much less an accurate summary. One thing to note about Fodor that, it seems to me, a lot of people miss is that he doesn’t claim to have an answer, but rather points out the paucity of explanatory power of the claims being made. For instance, from what I understood of The Mind Doesn’t Work That Way, the following is Fodor’s key objection:

My understanding (at least of the first part of the linked article) is that Fodor finds the “massively modular mind” (MMM) logically untenable, due to the need for “something” that oversees the whole shebang. In my mind, he’s making the point that if the mind is like a Prolog system – a set of logical facts and predicates that are syntactic in nature – there is no mechanism that can possibly perform the requisite computation to detect (in)consistency. In other words, given the vast set of facts each of us possesses, how can we possibly (using his example from the article) know that our visual perception of the size of the moon is flat-out wrong?
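To make the Prolog analogy a little more concrete, here’s a toy sketch in Python (entirely my own illustration, not Fodor’s formalism): a brute-force check of whether a little set of “beliefs” is even consistent. With three beliefs it’s instant; the catch is that the check enumerates every truth assignment, so it scales as 2^n in the number of propositions, which is exactly the kind of “requisite computation” that looks hopeless for a belief set the size of a person’s.

```python
# Toy belief base: each clause is a set of literals, where a positive integer
# stands for a proposition and a negative integer for its negation.
# {p1 or p2, not p1, not p2} is jointly unsatisfiable.
from itertools import product

clauses = [{1, 2}, {-1}, {-2}]

def consistent(clauses):
    """Brute-force satisfiability: try every truth assignment over all propositions."""
    props = sorted({abs(lit) for clause in clauses for lit in clause})
    for values in product([True, False], repeat=len(props)):
        assignment = dict(zip(props, values))
        if all(any(assignment[abs(l)] == (l > 0) for l in clause) for clause in clauses):
            return True   # found a model, so the belief set is consistent
    return False          # tried all 2^n assignments and none worked

print(consistent(clauses))   # False
```

Real theorem provers do far better than this on average, but the worst case is the same story, and that globality is (as I read him) Fodor’s point.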

As to the evolution objection, your question I quoted above sums it up exactly. What it comes down to is that we have no frickin’ idea how the mind relates to the brain. What kind of science is it to claim that evolution is responsible for it with zero evidence that that’s the case (not to mention even a single workable hypothesis)? As he says:

Then, you put the two together and you get this:

But the assumed local adaptation cannot be responsible for global properties/qualities. And I really liked the way he laid into evolutionary psychology thus:

As I read it, he’s just pointing out that Pinker and others are falling victim to the same pitfalls and fallacies as previous evolutionary theorists (for example, see eugenics or Lamarckian inheritance).

Thanks for pointing that article out to me; it was pretty straightforward and succinct. It also supplied some thinking fodder; I don’t really agree with Fodor’s objections about the MMM. I don’t see his issue with it, as I think the mechanism that allows one to reflect on one’s own percepts/concepts would be sufficient to do that work. But then, I also fall victim to not having an actual hypothesis as to how that would work (or even arise), so my opinion is just as bad (or good?) as anyone else’s.

I should put in my own disclaimer: I find Fodor’s writing to be extremely dense, and it’s very possible that I’m missing some of his central point, despite having read back and forth over that review a few times.

That said, I am not sure I agree that he successfully points out problems with the theory. As I see it, his objections fall into several areas:

  1. The Conspiracy Problem. How can we say that the brain evolved (for example) friendship as a way of increasing survival, when it’s equally possible that friendships developed for other reasons? Here I think he’s taking the “selfish gene” metaphor too literally. If an inclination toward friendship increases the likelihood of one’s children surviving to reproduce, then that inclination will spread throughout the species. There’s no motive behind the genes, just a random mutation in the gene that is selected for (there’s a toy simulation after this list that makes the blind-selection point concrete). True, there might be some more immediate reason that such an inclination would lead to survival, and the method by which it increases survival is up for debate, but the idea that such an inclination leads to survival is demonstrated by the fact that it’s part of our inheritance. Bringing motive into the picture confuses things unnecessarily.
  2. The There’s-No-There-There Problem. He doesn’t understand how we could evaluate the meaning of a sentence (among other things). While I don’t understand that either, I’m not sure that shows a shortcoming in the theory; it may equally show a shortcoming in my understanding of the theory, or a shortcoming in everyone’s understanding of the specifics.
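Here’s that toy simulation (my own numbers, nothing from Pinker, and I’m sure a real population geneticist would wince at the simplifications): a trait that starts rare and gives its carriers a small reproductive edge usually spreads through the population without anyone or anything intending anything.

```python
# Blind selection in a toy population: carriers of the "friendship" trait are
# slightly more likely to leave offspring; there is no motive anywhere in the system.
import random

POP_SIZE = 1000
ADVANTAGE = 0.05      # assumed 5% reproductive edge for carriers
GENERATIONS = 200

# start with 1% of the population carrying the trait
population = [random.random() < 0.01 for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    weights = [1.0 + ADVANTAGE if carrier else 1.0 for carrier in population]
    # the next generation is drawn from the current one in proportion to fitness
    population = random.choices(population, weights=weights, k=POP_SIZE)

frequency = sum(population) / POP_SIZE
print(f"carrier frequency after {GENERATIONS} generations: {frequency:.2f}")
# usually far above the starting 1%, though the trait can still be lost to drift
```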

It seems to me that we do have a frickin’ idea, based on such factors as open-brain surgery and brain injuries. When a certain part of the brain is severed, people can recognize objects but not name them; when another part of the brain is touched, people experience old memories very vividly; when a third part of the brain is destroyed, people’s ability to evaluate choices and exercise self-control suffers badly. Don’t such types of research give us pretty good ideas of how brain tissue and consciousness intersect?

Well, science is the best explanation at any given moment, right? Nobody to the best of my knowledge has either falsified the idea that the brain evolved, or suggested a theory that is more “elegant.”

This discussion is kind of at the limits of my understanding of this science: it’s something I read about almost purely on a sporadic, hobbyist basis. My apologies in advance for any newbie errors I make in it!

Daniel

Pshew, busy day. Sorry about the delay, and sorry that my mind isn’t really in this right now.

I’m not sure he does either. While I think that article you linked to was straightforward and succinct, I have a similar complaint with it as I do about The Mind Doesn’t Work That Way (from here on out, TMDWTW, if I want to refer to it). That is, it sounds like he’s raising valid points, but for the life of me, I can’t figure out the real meat of the argument.

It seems to me that some of his objections are the same as those in the debates over gene- vs. individual- vs. population-level evolution. I think “demonstrated” is too strong a word to use as you do, but I also have reservations about accepting his objection. I just don’t have a firm enough grasp on it to decide.

I think that’s the problem – there are no specifics. I was attending a philosophy of mind colloquium a year or so ago, and one of the people there quoted him as saying that computationalism is the best theory we’ve got (and may well prove correct), but that he thinks it’s not adequate to give a deep explanation.

And it seems to me that there really is a serious problem with “mind as computer” – syntax matching, the foundation of Turing machines (and thus all of the science of computing), is a pretty clunky mechanism to cover what it appears our minds can do. Personally, I think it’ll do; but lordy, lordy, I have no real substance for that cherished belief o’ mine.

Sorry, I was overstating for effect. Although, I would point out that we don’t really have a well-developed idea of the “intersection”. I mean, yeah, we can say that area X of the brain is usually responsible for condition or ability Y. But honestly, we really have little idea about how consciousness arises from brain matter. If we did, I’d most likely be out of a job (AI researcher).

Just to be clear, I’m not trying to scrap with you. I suppose one might characterize science that way – it certainly makes it easier to defend the philosophical status of science as the best truth we have. Somehow, though, I expect more. To my knowledge, we’re still at such a basic stage that to call anything a full-fledged “theory” is kind of misleading.

Perhaps it’s just semantics. However, it’s interesting, in this context, to think about how the theories and metaphors we use do shape our thoughts on things. Fodor is right to point out that we should be careful about how much weight we put on evolution; it may end up blinding us. (Insert something knowledgeable about Kuhn and revolution, Heidegger and technology, or Stengers and metaphor).

But you know what? We’ve totally gotten away from Pinker. I hope some resident linguists weigh in on this. And I hope I’ll have time in the next couple days to pursue this in more depth – I think I might have to dust off How the Mind Works for a brief review.

So, I dusted off How the Mind Works and thought some initial thoughts might be of interest. I’m only through the preface and chapter one, but that’s enough to make the foundation of the book quite clear. I must say, I had forgotten how enjoyable Pinker is to read.

So, it seems to me that I agree almost totally with Pinker. It’s fascinating to me that he starts off with what he calls “The Robot Challenge”; that is, a discussion of what it would take to design an intelligent robot. This hits close to home for me, as it’s what I do for a living. He treats the matter fairly well, although I’m not sure he truly gets across the difficulty involved.

In setting up his argument, he puts in enough qualifications on broad statements that it’s difficult to actually argue with him. For instance, he says, “The computational theory of mind is not the same thing as the despised ‘computer metaphor.’” Obviously true, and it needs to be stated over and over again, for it’s too easy to fall into the trap of misunderstanding the arguments. But I’m left with the impression of a politician – that is, put enough qualifiers on things and you’re able to then selectively say “but I specifically said…”. I’m not sure I can keep all of them in my head as I read, especially as I’m sure any contradictions or issues will be very subtle.

Furthermore, I do take issue with some of it. Essentially, I have a similar objection as Fodor – Pinker’s treatment of the MMM is extremely shallow and doesn’t say much about the “Mind” (capitalized to emphasize the non-trivial substance needed). What I mean is this: the modules he’s discussing thus far deal mostly with perceptual apparatus. Yes, vision processing is remarkable, as is auditory processing, speech recognition and production, etc. But this is just the first “layer” of the mind, and – as difficult and amazing as it is – the easy part. Braitenberg pioneered work in such simple apparatus, followed by Brooks’ revolutionary work (google on “braitenberg vehicles” and “brooks subsumption” for more).

To put it bluntly, we can get a lot of mileage out of reactive systems, but they don’t come close to having the capabilities we want to require of an “intelligent” being. There’s literally no theory (OK, that’s an overstatement for effect; for instance, see Dennett’s Consciousness Explained or work on reactive planning like Firby’s RAPs) about how the result of these “initial” processing modules gives rise to what we’d consider “mind” in any meaningful sense. And I think that’s where Fodor’s objection about “global” vs. “local” comes in; there really is an issue here. Assuming the computational theory of mind, at the highest level, asserting strict “modules” makes the brain a sequential processor. But if that’s the case, how can a module arise that is responsible for global coherency and control? (Personally, I think the computational model has an answer for this in terms of reflection, but again, have no well-formed theory for support.)

Then there’s the issue of computation as syntactic processing – syntax is not only brittle, but requires huge amounts of processing (on a global scale) to be successful. At a really deep level, we not only have to explain how humans can process such vast amounts of information coherently, but how we can do it in real-time. What we know about computation seems to indicate that it’s impossible – what we do routinely in milliseconds would require lifetimes of computation. And this is not to say that the computational theory of mind is wrong, but that we don’t have any deep theories about how it actually works (or even could work) that aren’t just hand-waving (i.e., step 1: the brain processes input, step 2: a miracle occurs, step 3: we have consciousness). Minsky gave a keynote at last year’s AAAI conference with a similar complaint – specifically, he talked about the need for multiple representations (perhaps “interpretations” is a better word, or even “semantic content”) for the information processed by the brain. And little work (or headway, perhaps) has been accomplished on that front. If you’re interested, he has a working copy of his latest book available on his website that makes this pretty clear.
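To put some entirely made-up numbers on that real-time point, here’s the back-of-the-envelope arithmetic I have in mind (the branching factor, depth, and operations-per-second figures are assumptions for illustration, not measurements):

```python
# Why unconstrained syntactic search can't be what happens in milliseconds:
# even modest assumptions about branching and depth give an astronomical space.
from math import log10

BRANCHING = 10        # candidate rules applicable at each step (assumed)
DEPTH = 30            # inference steps behind one "simple" judgment (assumed)
OPS_PER_SEC = 1e16    # generous guess at raw operations per second

states = BRANCHING ** DEPTH
years = states / OPS_PER_SEC / (3600 * 24 * 365)
print(f"search space: 10^{log10(states):.0f} states")
print(f"exhaustive search: roughly {years:,.0f} years")
```

Obviously the brain isn’t doing exhaustive search; the question is what the heuristics are and where they come from, which is exactly where the hand-waving starts.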

I thought you might be interested and I hope this discussion continues. If I come across anything else in my reading, I’ll post about it…

I’ve not read The Blank Slate, but this thread seems to have morphed into a How The Mind Works vs. The Mind Doesn’t Work That Way discussion, both of which I have (the latter being incredibly expensive for a 100-page paperback, but worth it all the same).

Neither of the two discusses how connectionism and syntactic computation ‘proper’ might work together (with one embodying the other somehow), which I think might well be the way forward - indeed, connectionism is hardly mentioned at all in either book. And I think the opposition aspect is a little overplayed - after all, Fodor admits straight away that the CTM is the only non-ludicrous game in town, and it’s the details about which Pinker is so optimistically blasé compared to Fodor’s Eeyore-like pessimism. This isn’t evolutionist vs. creationist; it’s more like two dinosaur experts arguing over whether it was just the Chicxulub meteor that killed them off.

So, while I think that some of Fodor’s arguments are rather nitpicky, I’d agree that Pinker’s book might need a change of title. Just as a full account of the development of, say, the wing doesn’t actually tell you the engineering principles of how wings actually work, so the book How The Mind Works is yet to be written. But Pinker’s would definitely make an excellent follow-up entitled How The Mind Got Like That In The First Place.

Thanks tremendously, Digital Stimulus, for the posts–my lack of response so far doesn’t mean I’m not reading, but rather that I’m still absorbing. Fodor’s objections are obviously fairly new to me, and I need a while to think about them. SentientMeat, the point about connectionism and syntactic computation sounds interesting, but I’ll need to refresh myself on those two concepts.

Maybe I can pick out something specific from the book to talk about: he suggests that, when it comes to your adult personality, the variance in it (i.e., the differences between your personality and that of other folks) is explained by, approximately:
Genetics: 50%
“Fate” (i.e., the fiddly things that happen to you in your life: diseases, your friends, a movie you saw when you were six, etc.): 40-50%
Family environment: 0-10%.

That last one, suggesting that (within a normal middle-class American family devoid of outright abuse) parenting style has no effect on a child’s personality, strikes me as ludicrous. The facts he marshals–that adopted children end up no more like their adoptive siblings than like children from other families, while biological siblings raised in different families grow up about as alike as siblings raised in the same home–are pretty persuasive. But I still have a great deal of trouble with his conclusion.
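To see whether those numbers at least hang together, I threw together a toy simulation (my own crude model, not Pinker’s data; among other simplifications, the “siblings” here share their entire genetic value, which makes them more like identical twins):

```python
# Personality as a weighted sum of genes, shared family environment, and
# unique "fate", using roughly the variance shares Pinker reports.
import random
from statistics import correlation   # Python 3.10+

GENES, SHARED, UNIQUE = 0.50, 0.05, 0.45   # assumed variance shares
N = 20_000

def personality(genetic_value, home_value):
    return (GENES ** 0.5 * genetic_value
            + SHARED ** 0.5 * home_value
            + UNIQUE ** 0.5 * random.gauss(0, 1))

def corr(pairs):
    xs, ys = zip(*pairs)
    return correlation(xs, ys)

same_genes_same_home, same_genes_apart, unrelated_same_home = [], [], []
for _ in range(N):
    genes = random.gauss(0, 1)                  # fully shared genes (twin-like simplification)
    home_a, home_b = random.gauss(0, 1), random.gauss(0, 1)
    same_genes_same_home.append((personality(genes, home_a), personality(genes, home_a)))
    same_genes_apart.append((personality(genes, home_a), personality(genes, home_b)))
    unrelated_same_home.append((personality(random.gauss(0, 1), home_a),
                                personality(random.gauss(0, 1), home_a)))

print("share genes, share home:   ", round(corr(same_genes_same_home), 2))  # ~0.55
print("share genes, reared apart: ", round(corr(same_genes_apart), 2))      # ~0.50
print("unrelated, share home:     ", round(corr(unrelated_same_home), 2))   # ~0.05
```

With those weights you get roughly the pattern the adoption studies report: the genetic share drives the sibling resemblance, and the shared home adds almost nothing. It doesn’t make the conclusion easier to swallow, but at least the arithmetic is coherent.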

Daniel

Like I say, I’ve not read it, but he does make similar claims in HTMW. And I’d go along with them, really: after all, what is the ‘family environment’ but a subset of ‘fiddly life stuff’? Consider how much time from birth to adulthood you spend solely with your family, compared to with your friends, your teachers and schoolmates, or even just alone, and 10% (or 20% of ‘fiddly life’) seems more reasonable. Ask yourself in addition who you were most trying to be like in that period and it makes even more sense. Our families accept us unconditionally, but our peer group is another story.

As an older friend said to me about my 7 month old nipper, enjoy his complete devotion to you now because you’ll lose a large part of him at school.

That is true–but consider some of the things that fall under “home life”:
-Do you give your children whatever they want, or do you set limits?
-Do you emphasize the importance of schoolwork, or do you act as if it can be blown off?
-Do your children hear a lot of music in the home and see adults getting together to play music regularly?
-Do your children hear more than one language spoken in the home?
-Do you take your children on lots of trips?
-Do you encourage your children to fight back physically when confronted by a bully?

I just have a very hard time believing that influences like these add up to less than 10% of an adult’s personality, especially such factors as whether you spoil your children.

Daniel

I’m not being particularly argumentative here, but a few counterexamples:

Either way, they will eventually learn that they simply cannot have everything they want without serious negative social consequences. Again, the education they receive ad hoc from a peer group in this respect might be far more important.

Again, I’d suggest the school itself and the pupils in it would be far more important here.

If there’s a television, they’ll see and hear both regularly. Music lessons and the like aren’t insignificant, of course, but they cost money, and I don’t think one’s personal affluence can be brought solely under the ‘family environment’ umbrella. The whole rest of your ‘fiddly life’ will be radically different, too.

Language is a big factor, of course. But, again, note how many kids speak a different language at school than at home. (Heck mine will be taught in Welsh despite me knowing barely any.)

More importantly, if you don’t, do they just go somewhere else anyway? I suggest an afternoon at a friend’s house can be just as formative as a day somewhere impressive or natural with parents.

I suspect this has a strong genetic influence, actually - for many kids it’s just not in their nature to employ physical force, even in defence, no matter what the encouragement.

Of course, ‘spoiling’ isn’t insignificant either: if the genetic half is true, you’re still talking about a whole 20% of that which can be influenced. But, as I say, I think ‘spoiling’ just delays the lessons in many cases. A kid finds out how the world really works sooner or later. Of those kids who are woefully ill-prepared from ‘spoiling’, what percentage simply can’t or don’t learn to deal with it? Is 20% really that inconceivable?
Of course, you might well be right and Pinker wrong - as I say, I’ve no strongly held position here. But if he is right, think how much influence schoolteachers have on each and every kid in their care. When you’ve finished your training, your first pupils will each become a percent or so YOU.

Sleep tight. :slight_smile:

Yeah, sorry about that. It’s just too damn interesting to let the thread die on the proverbial vine. And it’s difficult to find anyone that criticizes Pinker; he’s just so…so…damned reasonable. :slight_smile:

Thanks for the links; I’m looking over the computational one now, but probably will have to abort to get other things (i.e., work) done. Here’s what I think one of the major disputes is:

They do well to qualify that, as it’s not only an open question, but one that may very well be flawed. I think the usual objections laid at the door of the positivists apply here as well. It’ll be interesting to see if and how they discuss it.

As to connectionism (I’m not going to read the link right now), it’s not surprising they don’t mention it – it’s the elephant in the room. So little is understood that it’s almost useless to even attempt to use it as a bridge between brain and mind. Sure, Elman networks have been shown to be able to learn language rules. Sure, Kohonen feature maps are stellar classification machines. Sure, Hebbian learning is biologically plausible (in fact, grounded in ethology). But we have little idea how to extract representations from any neural net currently in existence – and it’s not just a lack of theory; we don’t even have an idea of how to design tools to analyze the theories. (Nonetheless, given all that, I’m pretty confident that we will; I don’t mean to imply otherwise.)
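Just to show how little machinery the biologically plausible part takes – and, by contrast, how opaque the result is – here’s the raw Hebbian rule on a single toy unit (my own throwaway example; the inputs and sizes are chosen purely for illustration):

```python
# One output unit, four inputs. Hebb: weights grow where pre- and post-synaptic
# activity coincide, so the two inputs that fire together end up with far larger
# weights than the noise inputs. The raw rule is unnormalized, so weights keep growing.
import random

ETA = 0.01
weights = [random.uniform(0.01, 0.1) for _ in range(4)]

def hebbian_step(x, w):
    y = sum(wi * xi for wi, xi in zip(w, x))               # postsynaptic activity
    return [wi + ETA * xi * y for wi, xi in zip(w, x)]     # delta_w = eta * x * y

for _ in range(500):
    s = random.choice([0.0, 1.0])                          # inputs 0 and 1 fire together
    x = [s, s, random.gauss(0, 0.5), random.gauss(0, 0.5)] # inputs 2 and 3 are noise
    weights = hebbian_step(x, weights)

print([round(w, 2) for w in weights])   # the first two weights end up far larger
```

The learned weight vector “knows” that inputs 0 and 1 go together, but try pointing to where that knowledge is represented in a form the rest of a cognitive theory could use – that’s the extraction problem I mean.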

Ultimately, at the layperson level, it seems to me that Fodor and Pinker agree substantially. I think the same can be said in general of cognitive scientists that are taken seriously. But when you delve into the details, as you say, there are substantial issues that need resolving.

To get in line with the direction the discussion is now taking, I very much find Pinker’s claim about parental influence credible. Don’t know if I’d put it at 10%, but it’s certainly way down there. I think of it like this: the time when I personally was mostly influenced by my parents was between the ages of 0 (I suppose) and 8 or 9. In my opinion, little personality development occurs during that time; mostly, I developed motor control (based on my sports skills, badly, I must say) and other core functionalities. The radical changes my actual personality went through during my teenage years (when my peer group exerted huge pressures) are surpassed only by the changes that occurred between the ages of 20 and 25. That was when I broke free of peer expectations and started analyzing my beliefs and behaviors. Now, I’m encrusted with the results; while they can be changed (possibly), it takes a buttload of work to do so (blame it on Quinean determination).

Once again, when I honestly consider Pinker’s arguments, they seem spot on to me.

So noted; it’s great to be a part of this discussion (and thank you for starting it). As I said to SentientMeat, it would be a damn shame to have the thread drop off the first page.

To stay true to the OP, I’ll start by chiming in on The Blank Slate. I thought it was the weakest of Pinker’s works, exactly because, as was pointed out early in this thread, its thesis is so patently obvious to me; it amazed me that a book of this size was needed to make the case.

Left Hand, please remember that personality is not values. Other than by my genetic contribution I have had little influence on my biological kids’ personalities, and none on my adopted daughter’s, yet I have a high degree of influence on their values. The importance of that cannot be overemphasized. Still, as a parent I depend on their natural strengths outweighing my capacity to screw them up ;).

Secondly, I’d like to ask Digital and SM what they think of Stephen Grossberg’s Adaptive Resonance Theory and CogEM models, in which he portrays conscious states as being resonant states. If you are not familiar with his work, here is a link: http://cns-web.bu.edu/Profiles/Grossberg/ – see specifically Grossberg, S. (1999), “The link between brain learning, attention, and consciousness,” Consciousness and Cognition, 8, 1-44 (a preliminary version appears as Boston University Technical Report CAS/CNS-TR-97-018; both are available in PDF and postscript on that page, and modestly I point out the most recent addition to his list). Clearly he is consistent with the fact that the mind is a massively nonlinear system subjected to a variety of external forces. I’d also like to know what they think of the body of work on the Neural Correlates of Consciousness, and specifically of the concept that consciousness is actually nested sets of looped processes built out of the need to distinguish the changing entity of “self” from “non-self.”

Dig, as an AI guy, I’d love to also get your response to a hugely digressionary concept: the modelling of creative thought. I believe that most concepts can be conceived of as objects in an n-dimensional conceptual space (I have actually found that others have had that same conception – Peter Gardenfors, for example), and I think of metaphors and creative thinking as the very real geometric rotation, translation, and transformation of those objects within that conceptual space, finding unexpected good fits. I’ve never had either the mathematical expertise in n-dimensional geometry or the skill in computer languages to model this process, but I wonder if anyone has had such an idea. Do you know of any work similar to that, or whether such a modelling is even doable?

Thanks for your thoughts in advance.

Funny that you bring Grossberg up. Prior to dusting off HTMW, I was in the middle of Arbib’s Handbook of Neural Networks and Brain Theory, and realized I really need to reach some kind of understanding of ART (which, alas, I don’t really have a handle on at this point in time). It was, prior to getting “sidetracked”, next on my list. I’m looking at the paper you cited right now, though; I’ll give my thoughts on it later.

I wonder how ART relates (if it does at all) to Edelman’s work/theories (as presented in Wider than the Sky). Gah – I need a refresher on that one also. You guys are loading me up with reading into next year, it seems. And I say that with much appreciation and enjoyment.

This is outside my area (not due to lack of interest, but lack of time). When you mention “creativity”, I think of Barry Werger’s “dancing robots” and some computer “painters”, for which I don’t recall the details (some were featured in Kurzweil’s The Age of Intelligent Machines, if you need a reference). It seems to me that prior to (or in conjunction with?) doing meaningful work on creativity, the representation problem (mentioned above with reference to Minsky) needs some type of resolution. If you wouldn’t mind, can you supply some other names also? I’m not familiar with Gardenfors, but am, of course, interested. As far as metaphor goes, I think you’d do well to look into Lakoff and Johnson’s Philosophy in the Flesh – not really an AI book, as it’s philosophy that’s not really tied to implementation (outside of claiming the necessity of embodiment), but it was excellent.
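Just so we’re talking about the same picture, here’s the barest possible sketch of your geometric idea (the concepts, features, and “fit” measure below are all made up by me for illustration – the hard part is where real feature vectors and the right transformations would come from):

```python
# Concepts as points in an n-dimensional feature space; a "creative move" is a
# rigid transformation (here, a rotation in one plane), and "fit" is just
# Euclidean distance before and after.
import math

concepts = {                       # hypothetical 4-d feature vectors
    "violin":    [0.9, 0.1, 0.8, 0.2],
    "voice":     [0.2, 0.9, 0.7, 0.3],
    "engine":    [0.8, 0.2, 0.1, 0.9],
    "heartbeat": [0.1, 0.8, 0.2, 0.8],
}

def rotate(v, i, j, theta):
    """Rotate vector v by theta radians in the plane spanned by dimensions i and j."""
    w = list(v)
    w[i] = math.cos(theta) * v[i] - math.sin(theta) * v[j]
    w[j] = math.sin(theta) * v[i] + math.cos(theta) * v[j]
    return w

source = "engine"
moved = rotate(concepts[source], 0, 3, math.pi / 3)

print(f"distances from '{source}' before and after the rotation:")
for name, vec in concepts.items():
    if name == source:
        continue
    print(f"  {name:9s} before: {math.dist(concepts[source], vec):.2f}"
          f"  after: {math.dist(moved, vec):.2f}")
```

Searching over a family of such transformations for unexpectedly good fits would be the creative step; getting concept representations good enough for the distances to mean anything is, once again, the representation problem.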

Now, as far as n-dimensional vectors and such go, I just attended a presentation about using “growing neural networks” as the basis for a robot learning about its environment. Essentially, sensor readings form the (arbitrary) vector of inputs, fed into a Kohonen feature map that dynamically grows to provide coverage of the feature space within some error bounds. It’s a neat idea, but practically useless (at least as presented). What I mean is that the classification ends up being so large that for scenarios of any substance, it would be computationally intractable. Personally, I’m enamoured with Rauber’s work on Growing Hierarchical Self-Organizing Maps, which are very promising to my mind. (It looks like there’s follow up work that I wasn’t aware of and haven’t read yet here).
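For anyone following along who hasn’t met the mechanics, here’s the core of a Kohonen map update in a few lines (fixed-size and one-dimensional for brevity; the sensor distributions and sizes are my own toy choices, and the “growing” variants essentially add units whenever the quantization error gets too high):

```python
# Minimal self-organizing map: find the best-matching unit for an input and
# pull it (and its topological neighbours) toward that input.
import math
import random

MAP_SIZE, DIM = 10, 3               # 10 units arranged in a line, 3-d "sensor" inputs
units = [[random.random() for _ in range(DIM)] for _ in range(MAP_SIZE)]

def train_step(x, lr=0.1, radius=1):
    bmu = min(range(MAP_SIZE), key=lambda i: math.dist(units[i], x))   # best-matching unit
    for i in range(MAP_SIZE):
        if abs(i - bmu) <= radius:  # update the winner and its neighbours
            units[i] = [u + lr * (xi - u) for u, xi in zip(units[i], x)]

for _ in range(1000):
    # fake sensor readings clustered around two "situations"
    center = random.choice([[0.1, 0.1, 0.1], [0.9, 0.9, 0.9]])
    train_step([c + random.gauss(0, 0.05) for c in center])

print([[round(u, 2) for u in unit] for unit in units])   # units drift toward the two clusters
```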

I apologize for not really providing anything of substance beyond a list of cites, toy sketches, and “not familiars”, but there’s really too much with which I’m not conversant. :frowning: I learn more every day though. :slight_smile:

I’m rather busy so just a quick fly-by: Philip Johnson-Laird does good work on creativity (I like his programs for improv jazz basslines!) - you’ll find a lot of it on his website at Princeton.