The Blank Slate

That’s what I had guessed.

Yes, I’m rather surprised, given the little I’ve read, that no one ever forced me to read up on it.

Yes, I like what I’ve read thus far. The emotion part is of particular interest because that’s one of the things people in my lab are looking into.

DSeid and Digital Stimulus,

With respect to back-propagation:
I agree, we are not going to emulate human thought with back-prop.

Well, “symbol” is used here as the means to perform computation. That is, a syntactic entity as might be found in a universal Turing machine.

Exactly why they aren’t sufficient.

As I said before, I don’t think anyone disputes that the brain is a neural net. The issue is more along the lines of “how in the hell does it work?”

Welcome to the “stumbling block” club. :smiley:

Oh, no. I too think a neural net can handle all three layers (using ourselves as concrete evidence that it can). We just have little idea how it does it!

Right; it’s not a biologically plausible model. But it’s a really nifty technique for other things. There’s a distinction between biological and non-biological nets that is important. For some purposes, non-biological models are just the right tool for the job, if you’ll allow me some programmer-speak.

Sorry for the triple post, but I think I was a little o’erhasty in this response and feel the need to clarify. The above refers to computationalism (as in the article to which SentientMeat linked). However, I was using the term in a much looser sense in the bit to which you (RaftPeople) replied.

We seem to be able to use and manipulate symbols, as when we do propositional logic (to take a simple example). The power of symbolic manipulation is indisputable; surely something we want to attribute to human cognition. But computationalism is (may be?) the best model we have for how this all works. (SentientMeat – know anything about dynamical systems or other models?) The question, then, is: how do we reconcile symbols (as in computation) with symbols (as in the ambiguous, ethereal, multi-representational concepts we use so deftly)?
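To be concrete about the first, syntactic sense (a throwaway sketch, nothing more): here symbols get shuffled purely by rule, with no understanding anywhere in the loop.

```python
from itertools import product

# Symbols in the purely syntactic sense: evaluate (P and Q) -> P over every
# assignment of truth values. The machinery 'knows' nothing about what P or Q
# mean; it just manipulates tokens according to rules.
def implies(a: bool, b: bool) -> bool:
    return (not a) or b

for P, Q in product([True, False], repeat=2):
    print(P, Q, implies(P and Q, P))   # a tautology: the last column is always True
```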

If consciousness really is exclusively a product of brain matter and we accept that brain matter forms a neural network, then we have to explain the basis of “symbols” (in the loose sense). We have some of this, for instance at the perceptual level, but not at a higher level. Furthermore, if computationalism is correct, we need to explain how such symbols (again, in the loose sense) are related to / are derived from symbols (in the syntactic entity sense).

Good – I feel better now for having clarified that.

Y’know, I had missed this before. Perhaps it’ll put the discussion back into the realm of Pinker. Here goes…

Is it clear that there are innate prototypes? I’d assume that they’d have to be genetically encoded; is there evidence of this?

Personally, I’ve always viewed language as merely statistical learning; since we exist in space/time, obviously objects (which exist in space) and actions (which exist in time) are foundational in our existence. If language is taken strictly as a means of communication (ignoring its use to augment “mentalese”), then it seems to me that there is a very limited set of things that need to be expressed. With a small enough basic set, there’s a very finite number of possible ways (i.e., permutations) in which these things can be combined. I believe these permutations are described by the universal grammar. I always felt like Chomsky was imposing a description on the world, rather than providing an explanation. Is there any consensus about that?
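Just to illustrate the combinatorial point with a throwaway example (the primitive set is obviously invented): a small basic set of things-to-express yields a finite, enumerable space of ways to combine them.

```python
from itertools import permutations

# Purely illustrative: a tiny, invented set of communicative primitives.
# Nothing linguistic is claimed; the point is only that a small basic set
# yields a finite, enumerable space of orderings.
primitives = ["agent", "action", "object"]

orderings = list(permutations(primitives))
print(len(orderings))          # 3! = 6 possible orderings
for ordering in orderings:
    print(" ".join(ordering))
```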

Guys, guys, I only just know what these are. I’m not even sure what they’re capable of or not, and certainly can’t comment on how ‘promising’ each might be.

We have the computer scientist and the clinical experimentalist here, so I’ll offer to be the philosopher (not that I’ve any recognised accomplishment there either, but hey ho). I’d suggest that my job is to stop us talking at cross purposes: in this thread it will likely be trying to point out the scope and limits of computational psychology (which is, of course, the subtitle of The Mind Doesn’t Work That Way).

So, what I appear to have kicked off here is a discussion of how far down the top-down approach (i.e. syntactic CTM proper) gets us these days and how far up the bottom-up approach (connectionist neural modelling) gets us. And the only thing my level of technical expertise allows me to say with confidence is that the gap is no Sistine-Chapel-ceiling finger width. It’s more like the supernatural how-the-hell? CTM waving frantically to the poor, blind, catastrophically incompetent Neural Network from across the Grand Canyon (which is why I always call an explanation for human cognition and consciousness the Challenge of the Millennium).

And I think one of the major causes of crossed wires comes from what I said above:

I think that Digital ought perhaps to look at his endeavour a little like molecular biologists trying to build an amoeba from proteins. It’s made of proteins, so why the hell can’t we just go ahead and assemble one?

The key difficulty (nay, impossibility) in both quests strikes me as essentially being one of encryption (which I haven’t actually seen explicitly in my admittedly limited reading list – any suggestions? I haven’t read Freedom Evolves yet). I’m not sure we’ll ever know every relevant biochemical reaction on the timeline from molecules to amoebae – somewhere in that timeline evolution might well have used some crucial neat little trick which we simply cannot identify – it’s as though a message has been encrypted numerous times using different keys. Most of the keys are easily guessable (the equivalent of a simple Caesar shift), some of them need vast computational modelling (like PGP). Some, I venture, are effectively perfect, unbreakable one-time pads. These, I suggest, are the crucial steps which can be characterised by Dennett’s phrase “…and then a miracle occurs”: if a computation exceeds the Landauer/Lloyd limit, one might as well call it a “miracle”.
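To make the metaphor concrete, here’s a toy sketch of my own (nothing from anyone’s reading list): a Caesar shift can be reversed by brute force, but XOR against a truly random key that has since been thrown away is, information-theoretically, gone for good.

```python
import secrets

def caesar(text: str, shift: int) -> str:
    """Toy Caesar shift over lowercase letters: breakable by trying all 26 shifts."""
    return "".join(
        chr((ord(c) - ord("a") + shift) % 26 + ord("a")) if c.islower() else c
        for c in text
    )

def one_time_pad(data: bytes, key: bytes) -> bytes:
    """XOR against a random key as long as the message: lose the key and the
    plaintext is unrecoverable in principle, not just in practice."""
    return bytes(b ^ k for b, k in zip(data, key))

message = b"and then a miracle occurs"
key = secrets.token_bytes(len(message))   # stands in for the lost evolutionary step
print(caesar("evolution", 3))             # 'hyroxwlrq' -- trivially reversed
print(one_time_pad(message, key))         # without the key, indistinguishable from noise
```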

So it may well be with this thing we call “consciousness”: the vast computer called biological evolution might yield outcomes which we can never decrypt. Think about how one might get a computer to feel pain. We could simply type a line IF input=X THEN Pain=100. But could that newly introduced variable “Pain” ever actually be pain as we know and viscerally hate it? Even if we make the sensors out of literal biological cells, their output will still only be an electrochemical action potential of a certain voltage. The development of those very cells themselves, together with those in the thalamus (or wherever else) might have required numerous evolutionary ad hoc (or the beautifully appropriate spoonerism odd hack) jumps which we will simply never appreciate, lost as they are in the noise of history. This is what I think Fodor meant when he said that even computational psychology at its most optimistic might not be any of the truth about consciousness.
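Here is that move written out in full, for what it’s worth (names entirely hypothetical, and that’s rather the point): everything behavioural is present and accounted for, and nothing that obviously hurts.

```python
# The "IF input=X THEN Pain=100" move, spelt out. All names are hypothetical;
# the variable does genuine causal work (it drives avoidance), yet nothing
# here tells us why its biological counterpart actually hurts.
DAMAGE_THRESHOLD = 42.0

def nociceptor(input_signal: float) -> int:
    """Return a 'pain' level of 100 when the signal crosses a damage threshold."""
    return 100 if input_signal >= DAMAGE_THRESHOLD else 0

def behave(pain: int) -> str:
    return "withdraw and complain loudly" if pain > 0 else "carry on"

print(behave(nociceptor(7.0)))    # carry on
print(behave(nociceptor(99.0)))   # withdraw and complain loudly
```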

Anyway, that’s my rather Eeyoreesque philosophical take on the matter. I truly hope that a neural network might one day embody (Lakoff’s favourite term) a syntactic computer-proper, given enough ingenuity from Digital and his colleagues (and, yes, I understand that back-prop was recognised as a dead end 20 years ago – layer upon hidden layer seems the utterly unpredictable way forward). But, for now, just filling in the feasibly fillable gaps is an eminently noble endeavour, so best of luck Dig.

Right. But personally, I’m not necessarily looking for an expert’s evaluation; I admit my ignorance in relation to many of the topics and hope that any tidbits thrown my way might fill in the blanks. At the same time, I fully understand not wanting to hold forth on something one knows little about (although it doesn’t seem to stop me for the most part :smiley: ).

Which appears to be one of the major strengths of ART; Grossberg mentions that he’s given a mathematical proof that top-down is required to keep bottom-up in line.

You mean technical suggestions? Like something with formulas about how to calculate secure keys? It seems to me, from your description, that you have a firm grasp on “encryption as a metaphor for historical evolution”.

So, I recently attended a seminar on “emergence for the reducto-phobe”. As I said above, lack of knowledge doesn’t usually stop me from making claims (qualified though they may be), and I did exactly that. I said that I didn’t understand the issues involved – it’s not clear to me why people want to attribute causal efficacy to mental states that are separate from brain states. I also said that, taking my cue from Dennett, while such mental states are convenient for us to refer to and do some actual work (if not philosophical, at least communicative), they are not independent entities that have their own reality; rather they are reifications we create as shorthand to refer to physical states.

After that long introduction, finally, the link to what you’ve said. The philosopher of mind in attendance said, “If that’s the case, then pain is fictitious and you should have no problem with me punching you in the nose.” I don’t get it; my response was that I didn’t deny that pain was real, just that it’s not separable and independent from our physical body. It seems close to a “philosophical zombie” argument in support of qualia. I’ve always had problems with that – the immediate response from me is that it’s a misdirection: no, you can’t have a “zombie” that is an exact duplicate of a person except without qualia. I’m still looking for a simple case as to what the issues are.

So, I think that’s squarely in the philosophical realm. Any help?

God, no, I read them for a living! (Telecomms patents, amongst other things). I meant, as you say, multiple encryptions as a metaphor for the explanatory steps we’re seeking in the emergence of life, cognition or whatever from non-life, non-cognition or non-whatever. Dennett was in Cardiff last year doing a lecture on Freedom Evolves but, damn, I missed it.

Oh, of course - it’s abhorrent proto-panpsychic dualism through and through, the equivalent of positing an elan vital separate from physical processes in cells. But we understand mental states intuitively, whereas configurations of neuronal firing just seem so far removed from mental states that some people despair of ever explaining the one in terms of the other, the way we do with life and molecules.

And this is another place where I think wires are easily crossed. “Reduce” is a very ambiguous word. Is it my position that mental states ultimately are (in philosophy terms “supervene on”) physical states? Yes. Is it my position that mental states can be reduced to physical states? Ah, therein lies the Challenge. And my answer is therefore not yet, and maybe never.

So, what you are seeing from such dualists in these debates is a kind of Mind of the Gaps – they mightn’t like the dualism either, but it’s just an easier port in the current storm than the utterly unsatisfactory (to them) explanations provided by cognitive science to date.

Hey, I’m right with you there, brother. Again, I think Dennett explores the issues very well in this classic essay, in which he uses various ‘intuition pumps’ to eg. demonstrate that, given what we know about the brain and what some philosophers say of qualia, there could be cases wherein you couldn’t experience your own qualia! In short, if someone says that they can imagine such a zombie, they’re not imagining hard enough.

So I, too, think that philosophers need to reword their objections if they don’t want to sound like 19th century vitalists. But that is not to say that there aren’t major objections worth making, especially on these subjects of reduction, explanation, scope and limits. I think that chap in your seminar should have asked what it is about our biological computer which makes pain painful instead of merely being a simple flag-up of some input signifying damage to the apparatus (which can be ignored). It is the difference between being punched in the nose and being shot in Doom – both involve signals of “damage” being received and processed in offal, but only the signals received at the offal called the “thalamus” hurt. I suspect that no matter how closely we study that particular offal, we will not find any distinguishing characteristic which explains this difference. It will be as though the answer will have been strongly encrypted by its historical development. We would just have to shrug and say “evolution done it” - again, a page in the book How The Mind Got That Way rather than How It Actually Works.

All-You-Can-Eat (sorry) Zombie Resources

(Personally, I’ve never liked Chalmers’ take on Zombies. I find more of interest in Jaron Lanier’s work, despite his, er, informal style of discourse.)

I have to confess that I threw out the terms “reductionist” and “eliminativist”, which kinda set the philosophers on edge. :smiley: At the same time, I did preface it with an “I’m most likely using these terms incorrectly…”

So, there’s another example (stolen shamelessly from someone in my office) that I think is better for these purposes. (And I go into this not looking for answers necessarily, but just for discussion.) Pain is really not such a good example of a non-physical mental state, as it has obvious physical ties. What about a data structure? That is, a purely conceptual…um…thing…that we can talk about, assign properties to, etc. Clearly, we can use and manipulate such symbols (in the loose, non-syntactic sense); how can this be? It seems to me that such mental states (i.e., concepts) need to be characterized in a functionalist sense. But certainly, that’s merely descriptive and not explanatory.
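For concreteness, here’s roughly the sort of thing I mean (a minimal sketch; the choice of a stack is arbitrary): a structure that exists only as a bundle of operations and the properties we stipulate for it, not as any particular lump of matter.

```python
# A minimal sketch of a 'purely conceptual thing': a stack. It is exhausted
# by its operations and the property we stipulate for it (last in, first out);
# no particular physical realization is implied.
class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

    def is_empty(self) -> bool:
        return not self._items

s = Stack()
s.push("first")
s.push("second")
print(s.pop())   # "second" -- the LIFO property we assigned to the concept
```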

Furthermore, as this came up as an afterthought to that seminar I mentioned, does it make sense to talk about such a thing being causally efficacious? Would it make sense to deny some mental states (e.g., a data structure) efficacy, but not others (e.g., intentions)? Or am I perhaps making a category error, falling into a pseudo-dualist trap in which I’m assigning mental states the properties and qualities that I’d deny in other circumstances? This is all fairly murky to me, as I think my philosophy of mind isn’t as robust as it needs to be…

I should preface this with a note that I’m not trying to be insulting (especially not personally), but I am trying to be disagreeable enough to provoke a response.

Perfect example of why I generally avoid Lanier’s work. It seems like there might be something of merit buried in there somewhere, but I’ll be damned if I can find it.

He has a problem with Dennett, that’s for sure, and, it seems, with science in general (at least as normally practiced). Is he a pantheist? A continental philosophy disciple? A “clever” person who takes delight in mental masturbation? A near-total knob? It’s not clear to me (well, except that last).

It looks to me like he’s desperately clinging to obfuscation in an effort to retain his self-importance – and what I mean by “self-importance” is the primacy of egocentric experience to the point of arrogant, yet adolescent, stamping of feet and gnashing of teeth meant to proclaim to the world “Not only am I unique, but I’m important!”.

But what do I know – after all, you can’t argue with a zombie, and I’m most definitely a zombie.

Do I misunderstand? Can you decipher what he’s talking about?

Dig:

There is no clear path from gene to prototype, but there are clear hardwired starting points. Most simply, there is the hardwired tendency to complete borders, etc., that leads to various perceptual illusions; these are very much the imposition of prototypes upon experience. And of course there are easy higher-level examples. We are wired from birth to respond to certain stimuli preferentially. The tendency, for example, to respond preferentially to facial features, and to particular smells, is innate. These are fairly low-level matches that are made automatically. From there we develop an association of those stimuli with warmth, comfort, and being fed, and we associate the particular facial features, and the particular smells, along with other features like voice, into a higher-level prototype that we eventually learn to identify as “Mom”. “Mom” perhaps is not, in itself, an innate prototype, but the lower-level matches are, and they unavoidably set up the sequence that results in “Mom” (albeit different precise prototypes) in most circumstances.

To bring this back to the OP: we are not blank slates. A blank slate is an inefficient thing to be when some features of our environment are predictable generation to generation and the best response to those predictable stimuli is also consistent generation to generation. In that case the expense of behavioral flexibility would be a drain on resources. Behavioral flexibility is expensive; save it for that which both changes and for which different sorts of responses affect survival and reproductive success.

Well sure. For some tasks catastrophic forgetting may be just what is needed. There is no need that an AI accomplish the goal in the same way as a human brain does. Modelling human function, figuring out how the brain works, and creating AI are not, per se, the same thing.

As to the philosophy side and zombies and qualia: if a zombie were me in every physical way, then it is me. How would I prove that an AI experiences qualia? The same way I prove that you experience qualia. … Oh right. I cannot prove that you experience qualia; I merely assume so because you say you do and you are put together similarly to me. Consciousness will always be a difficult bird to study because all we can do is measure correlates of what is reported as conscious experiences. Qualia and conscious experiences are not “things”; they are dynamic emergent processes.

Last, I can’t help but promote ART just a little bit more. Once the basics are understood it is really a very simple and intuitive concept. Of course we always have a constant interplay between our expectations and our experiences, with our hypothesizing the rest of a whole from incomplete data and then being very primed to see what we expect to see. We do it in basic perceptual processes (see perceptual illusions) and we even do it at societal levels. An example of the latter is the scientific method: data evokes a hypothesis and the hypothesis evokes a search for specific data that supports it; if we instead do not find it, and find data that goes against it, then we begin a search for another hypothesis that matches the data better. Functioning in a real environment always means functioning with incomplete information. We complete the pictures in our heads. What ART does is elucidate exactly how that process occurs and how and why we vary the degree of match we need (what vigilance is required).
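For the programmers in the thread, the match-and-vigilance step can be caricatured in a few lines (very loosely after ART-1: binary patterns only, no real learning rule, and the simplifications and parameter names are invented, not Grossberg’s).

```python
import numpy as np

def art_classify(patterns, vigilance=0.6):
    """Very loosely after ART-1: binary inputs, top-down templates, and a
    vigilance test deciding whether a match is 'good enough'."""
    templates = []                      # learned category prototypes (top-down expectations)
    labels = []
    for p in patterns:
        p = np.asarray(p, dtype=bool)
        for i, t in enumerate(templates):
            overlap = np.logical_and(p, t)
            # compare the bottom-up input against the top-down expectation
            if overlap.sum() / max(p.sum(), 1) >= vigilance:
                templates[i] = overlap  # resonance: refine the template
                labels.append(i)
                break
        else:
            templates.append(p)         # mismatch everywhere: recruit a new category
            labels.append(len(templates) - 1)
    return labels, templates

data = [[1, 1, 0, 0], [1, 1, 1, 0], [0, 0, 1, 1]]
print(art_classify(data)[0])   # [0, 0, 1]; raise the vigilance and the second
                               # pattern is no longer 'close enough' and gets its own category
```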

No real argument here. Egoic bloviating aside, I still find Lanier’s comments on zombies more interesting than the Chalmers/Dennett exchange. Their marathon nit-picking through contrived minutiae buried in layers of hypotheticals reminds me of two theologians arguing over whether Jesus’ sandals were brown or black, and the grave significance thereof.

My opinion on the matter is similar to DSeid’s; a being with brain and behaviors similar to mine would be conscious, with the caveat that there is no definitive test for consciousness, including Turing’s.

Still, the concept of zombies does raise interesting questions: we’re capable of some pretty complex cognitive processing, and even behavioral output, without conscious awareness. Makes one wonder why everything’s not just on auto-pilot.

Well, for me, one interesting point was that just as there is no definitive, objective test for consciousness, there is no definitive, objective test for computation, either:

Putnam, I believe, made a very similar argument. I tend to agree with Lanier’s observation that a Martian landing in my house might very well be unable to distinguish which appliance is the toaster and which is the computer.

Lanier also points out that in computer science, “information” is a concept that’s loosely defined and troublesomely subjective. There appears to be no objective criterion for differentiating information from noise.

Well, we’re now squarely into philosophy of mathematics territory (have you read Lakoff and Nunez’s Where Mathematics Comes From, BTW? Essential reading IMO), but I think the same objection applies. Put me in a brainscanner/imager and tell me to think about that data structure, and you’ll again see physical activity just as if you’d punched me on the nose. The question remains: what is it about that activity in those neurons which makes it “conceptualising”, when activity elsewhere is “pain”? The pain example is no less descriptive, really. Of course, I accept that that activity is what pain is, just as cellular processes are what life is, but I hesitate to call it a complete explanation.

Well, if you’re a physicalist then no matter how abstract or ‘conceptual’ a mental state is, it must supervene on the neurons (or glial cells, or whatever). All thoughts or differences in mental state are ultimately physical changes, somehow.
But that is not to say we can, or will ever, identify the relevant physical changes: again, I think the encryption metaphor is useful. I suggest that some concepts can be characterised as an average of memories - eg. the ‘concept of a tree’ is based on all the trees you’ve ever seen in your lifetime. And since that encoding activity is now lost to history (since we didn’t detect it back then in infinite detail), we’ve effectively lost the ‘key’. The activity in your brain when you conceptualise trees is now no more decodable than the apparent white noise from a perfect encryption.
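The ‘average of memories’ idea can be caricatured in a few lines (entirely made-up feature vectors, of course): each encounter nudges a running prototype and is then thrown away, which is exactly the sense in which the key gets lost.

```python
import numpy as np

# Entirely made-up feature vectors standing in for individual encounters with
# trees: [height_m, leafiness, greenness]. Each encounter folds into a running
# average and is then discarded -- only the prototype survives, not the history.
encounters = np.array([
    [12.0, 0.9, 0.8],
    [30.0, 0.7, 0.6],
    [ 5.0, 1.0, 0.9],
])

prototype = np.zeros(3)
for n, tree in enumerate(encounters, start=1):
    prototype += (tree - prototype) / n   # incremental mean; no record of individual trees

print(prototype)   # identical to encounters.mean(axis=0), but the 'key'
                   # (the individual encounters) is gone once we stop storing them
```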

This balance between what’s innate and what comes from the environment is, of course, as old as philosophy itself. (I strongly tend towards the latter, actually, but there clearly are innate modules which selectively act on some kinds of input exclusively, such as language and Cheater Detection, say.) You might remember this thread, in which I sought to establish the physicality of visual memories. Visual cognition seems to me to be an important part of all kinds of ‘thinking’ - even communication. One reason I don’t understand Chinese is because no utterance or written shape is associated with any memories of mine, whereas sentences in English can be mentally Photoshopped immediately (or, for the congenitally blind, Cubased immediately or something, and for Helen Keller, … errm patchwork-quilted together??). Saying that people can think in sentences might draw scorn from linguists and philosophers alike, but I don’t see a problem with it.

I suppose I could have worded my question better. It is not only obvious that there are top-down effects/constraints at work, but experiments support it (as does Grossberg’s proof?). What I was really asking about was specifically the innateness; that is, are there experiments to back the assertion up? Sure, I’ve seen some results concerning infants and face recognition. Are we sure there’s hard wiring going on and not statistical preference? If there is proof of literal hard wiring, are the mechanisms by which the wiring gets hardened understood?

Now, I might have an issue with this, but it might be due to semantics. Emergence implies a quality or characteristic that is not dependent on “lower levels” (e.g., “mental states” have properties and such that cannot be derived from neuronal function). I’m with Dennett on qualia; IMHO, they are chimeras.

The way I always read Turing’s test is exactly as you describe; that is, the only sufficiently definitive test for intelligence/consciousness comes down to “it takes one to know one”.

The problem is that Lanier doesn’t want to claim there’s a definitive anything. (rub, rub, rub Definitions? Pah – they’re for analytical philosophers. stroke, stroke, stroke Objects? No, no, you misunderstand; there’s only waves. Unless you’re talking about waves; then there’s only physical reality. whack, whack, whack Consciousness? Oh, you must be a poor misguided zombie. AAAH! Mental orgasm!)

The thing is, there is a precise definition of information (see Claude Shannon). With that in hand, there is a precise definition of symbol, which in turn provides a precise definition of computation. Perhaps you mean semantics? Perhaps that would be better characterized as meaning? If you (or Lanier or Putnam…and I hesitate to question a giant like Putnam) are saying that information can have different interpretations – and subjective ones, at that – sure, I’ll agree. But I’m not really sure what the objection is. Can you clear this one up for me?
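Just to pin down what I mean by “precise”: Shannon’s measure can be written in a couple of lines, and it never once mentions meaning (toy distributions, obviously).

```python
import math

def shannon_entropy(probs):
    """H(X) = -sum(p * log2(p)): Shannon's measure of information content,
    defined purely over a probability distribution -- no appeal to meaning."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin carries 1 bit per toss; a heavily biased one carries less,
# regardless of what anyone takes the tosses to be 'about'.
print(shannon_entropy([0.5, 0.5]))   # 1.0
print(shannon_entropy([0.9, 0.1]))   # ~0.47
```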

Dig, you are asking two things that really require overlapping answers. I’m at least not alone in my viewpoint. I’ll quote a little from Edelman’s Wider Than the Sky, which I have really just begun to read*: “consciousness is a process, not a thing.” (p.6, setting up the case that it is the pattern of activities that determines conscious experiences.) I believe that the search for a genetic marker of a prototype, or for specific hard-wiring of a prototype, is a categorical error. Analyzing specific neurons or genes will be as fruitful to understanding the dynamics of consciousness as analyzing individual water molecules is to understanding wetness and fluid dynamics in a vortex. The brain is a massively non-linear dynamic system subjected to external forces. Chaos theory tells us something about how to understand systems like this and about how not to. Attempts to follow the individual billiard balls are doomed to fail; attempts to understand which patterns are stable (attractor basins) and which are not, and how external pacers influence the systems, are much more useful** (a toy illustration follows at the end of this post). It is the dynamic pattern that matters even if the pattern is emergent from the individual bits.

I can cite articles showing just how far rat barrel cortical cells can adapt to other stimuli and how far they cannot, or a host of other cortical wiring patterns that are “experience-expectant” (develop fully without experiential input) vs. those that are “experience-dependent” (only develop fully with environmental inputs, idiosyncratic or otherwise), but that really doesn’t answer the question, I think. The appropriate level of analysis is higher than that, since the neuronal columns are not themselves prototypes any more than individual water molecules are flow.

*You had asked about Edelman vis-à-vis ART and consciousness. I’m not far enough along yet to comment too much, but so far his “dynamic reentrant interactions” do seem similar in form to what Grossberg would refer to as resonant circuits. ART just seems to be a bit more powerful a conceptualization so far. I’ll comment more as I get around to reading further.

**Chaos theory also tells us something else that I for one find fascinating: massively nonlinear systems have a tendency to be self-similar at different levels of analysis. Which theory shows that self-similarity tendency from basic perceptual processes to the societal level, eh?
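The toy illustration promised above (nothing to do with the brain per se, just the simplest chaotic map there is): following any individual trajectory is hopeless, but the long-run statistics of where trajectories spend their time are stable.

```python
# Toy illustration: the logistic map. Nearby starting points diverge hopelessly
# (so chasing the individual 'billiard ball' fails), yet the long-run statistics
# of where trajectories spend their time -- the attractor -- remain stable.
def logistic_trajectory(x, r=3.9, steps=100_000):
    out = []
    for _ in range(steps):
        x = r * x * (1 - x)
        out.append(x)
    return out

a = logistic_trajectory(0.200000)
b = logistic_trajectory(0.200001)          # an almost identical start

print(abs(a[50] - b[50]))                  # huge compared to the 1e-6 difference in starts
print(sum(a) / len(a), sum(b) / len(b))    # yet the time-averages nearly coincide
```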

Disclaimer: I’m about to tread on very thin ice here. My knowledge of computer science/AI is sparse at best, and my readings of the subject were done quite awhile ago. Please treat whatever follows as tentative statements; perhaps even as inquiries.

Yes, I was using the word “information” to refer to data content, not data transmission (IIRC, Shannon’s definition was strictly related to the latter, not the former). I apologize for not making my usage clear; in philosophy of mind, and especially in the Zombie Wars, I’ve frequently seen the two meanings used interchangeably, with predictably confusing results.

I believe Lanier’s objection boils down to: one man’s signal is another man’s noise. Data isn’t information unless it’s interpreted as information. Interpretation implies subjectivity, and subjectivity implies subjective experience… which Zombies aren’t supposed to have.

I’m pretty tired tonight, so I’m gonna keep this brief.

I haven’t read it. But using numbers as the example occurred to me; to be honest, I just couldn’t remember the appropriate term (ordinals?).

This does bring up all kinds of issues, though, doesn’t it? (Out of my league, so an honest question.) For instance, semantic content. Isn’t there some issue with identity also? Something about duplicating a person exactly, punching them both in the nose, and them experiencing the same brain states – but, obviously, their pain is different, being “located” in different people.

I have to say, I’m pretty OK with leaving things at a functional description; after all, I do consider myself a physicalist. Although I would like a deeper explanation. And obviously, lots of people aren’t OK with functionalism. More on that when I tackle a response to other-wise.

I still question the innateness of language; note that I’m not claiming that language isn’t innate, just that innate and hard wired are awfully strong terms. I see that DSeid’s post deals with that more directly. Drat. I’ll have to put it off until tomorrow. As far as understanding Chinese goes, I tend to think of the mind as adhering to its own hermeneutic circle, at least as I understand more recent work in hermeneutics (i.e., not bible interpretation). That is, it’s not a closed system, but does allow “input” that accretes via experience and learning, progressively giving rise to an expanded conceptual “library”. Strange to find myself squarely in the continental philosophy camp (shudder) in that regard.

Augh…it’s too late and I’m too tired. Was that at all comprehensible?