Towards a unified theory of perception, cognition, language, Reality and realities.

I believe that the fundamental change would not be a move towards certainty, but towards doubt. Instead of stating for a fact what Reality is, we would be required to create models. We would acknowledge that these models were second-order affairs which did not, and could not, contain all of the information inherent in the phenomena they were describing. I feel that, ultimately, the knowledge that all generalizations are best-guess estimates and probabilistic functions would, in roughly two generations, begin to produce a change in societal consciousness.

None direct, but I think that modern physics gives us the best indirect tool.

I believe that greater and greater refinement through indirect methods, such as physics, will give us better and better models with which to represent Reality. However, I believe that determining what Reality is not will have far greater implications for perception than examining how we can model whatever it is.

I would argue that the agrarian revolution in early human history would qualify. Aside from that, I would argue that as long as our paradigms remain incorrect, any investment of power will yield flawed results. If we’ve not been properly applying the knowledge we had, it would not lead us down any favorable paths.

I see a difference between individual violence and violence on a societal/national/international level. Most wars require conceptualization to motivate the massive amount of emotional energy required to fight. Punching someone in the nose, however, requires only emotion and, chances are, ends with little actual violence.

This is not in dispute. The weak S/W/K does not call for it, either. The point is that language, as a tool, can influence thought, perception, and reality. As such, it seems that we should take control of language like any other tool which we use.

Agreed. But I would also argue that interaction with other humans is indeed part of Reality. Thus, the perceptual habits of those around us will, ultimately, contribute to our own realities, however slightly. I recognize that the questions I’ve asked may be deceptively simple, and the answers may (and will most likely) include more than simply one discipline out of linguistics, biology, physics, sociology, economics, etc… I think that ultimately the whole system is in a state of interpenetration.

Agreed. The situation then becomes one of changing what people view as necessary and when they consider themselves dependent, and perhaps of changing the principle of scarcity and adopting Buckminster Fuller’s concepts of synergistic geometry as he applied them to societal/political/economic problems.

Agreed, and thus, in my mind, all the more important that we all communicate as best we can.

Can you elaborate on this? I do not grok it in fullness.

What if the change in reality were coupled with an organized change in Reality?

Perhaps… but I feel that if we were to allow maximum freedom for individuality while respecting the sanctity of individual lives, we would be able to adopt a non-zero sum paradigm.

I suppose you do have a point, that which is useful is often more important than that which is true. I’m not quite sure how to deal with that contradiction…

I’m not quite sure I want to touch this example as it looks like it’ll hijack the thread into a discussion of Psi, or what have you. And again, I admit, I don’t quite grok the significance of your argument. Would you perhaps be able to rephrase it without taking ESP as an example?

Agreed.
But wouldn’t greater accuracy about that on which we concur yield more efficient results?

And to be fair, create different problems in reality.

Possibly, although I think we can agree that there would be certain baseline perceptions which were socially or evolutionarily necessary?

Sure.
They took rats and made 'em run mazes until they learned the route. Then they’d cut sections out of their brains and have them run the maze again. The rats’ functioning in the mazes was reduced, but no matter what portion of the brain was removed, the memory of how to do it was intact. The conclusion is that memory is not located in any one place but encoded holographically. In the link on Pribram I gave you, it is explained exactly why the interconnected network of neurons is modeled on the interference pattern produced by the intersection of two laser beams.
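As a purely illustrative sketch (this is not Pribram’s actual model, and every number in it is invented), the basic idea of distributed storage can be demonstrated in a few lines of Python: several “memories” are superposed into one distributed trace, half the trace is “lesioned”, and a cue still retrieves the stored memory, only more weakly:

```python
import random

random.seed(0)
DIM = 2048  # dimensionality of each distributed pattern

def rand_vec():
    # A random bipolar vector; each one stands in for a "memory".
    return [random.choice((-1, 1)) for _ in range(DIM)]

def similarity(a, b):
    # Normalised dot product: near 1.0 for a matching pattern,
    # near 0.0 for an unrelated one.
    return sum(x * y for x, y in zip(a, b)) / DIM

# "Store" three memories superposed in a single distributed trace.
memories = [rand_vec() for _ in range(3)]
trace = [sum(components) for components in zip(*memories)]

# "Lesion" half the trace: zero out the first 1024 components.
lesioned = [0] * (DIM // 2) + trace[DIM // 2:]

probe = memories[0]
full = similarity(trace, probe)
cut = similarity(lesioned, probe)
print(f"intact trace:   {full:.2f}")
print(f"lesioned trace: {cut:.2f}")  # degraded, but still clearly above chance
```

The point of the toy is only that, because each memory is smeared across the whole trace, removing a chunk of the storage degrades recall gracefully rather than deleting any one memory outright, which is the qualitative behavior the rat experiments are taken to show.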

What exactly is your claim, besides your guess at my reading list? What work of DeValois squared (:D) contradicts what the author of the cite I gave claimed as its implications? Specifically, the claim made was that they “provide detailed reviews of experimental results that support the conjecture that holography is a useful metaphor in coming to understand the brain/mind relation with regard to perception.”

Which statement do you dispute? Are there not detailed reviews of experimental results, or do you feel that they are not a valid basis for conjecture about using holography as a useful metaphor? If not, why, exactly, not.
(And, notice, metaphor and model have the same semantic content)

Yes.
Data does not ‘say’ anything, it has to be interpreted. As such, there is data to support the conjecture that a certain model would be useful.

I wish you’d link to a cite, UT’s library browser has inputs for title, author, and subject matter. Searching for “king” and “memory” as the subject yields exactly zero results, and wastes my time while I could be clicking on one of your links.
And I am not sure as to how close your interpretations of the work are, as you’ve described prosopagnosia as ‘loss of memory for faces’, yet, quite clearly “Prosopagnosia, also called face blindness, is an impairment in the recognition of faces.”

There is no reason to assume that an inability to recognize faces means the memory is no longer encoded. In other words, an inability to access a memory does not mean the memory does not exist.

As for your other claims, I would very much appreciate links. Especially since “Experiments have shown that when presented with a mixture of familiar and unfamiliar faces, people with prosopagnosia may be unable to successfully identify the people in the pictures, or even make a simple familiarity judgement (“this person seems familiar / unfamiliar”). However, when a measure of emotional response is taken (typically a measure of skin conductance) there tends to be an emotional response to familiar people even though no conscious recognition takes place3.”

Now, from what I remember of The man who mistook his wife for a hat, various forms of damage to the brain cause perceptual problems, including the internal perception/recall of memory. But, again, the work with rats proves that memories are not isolated in any one location in the brain.

If there is evidence that memory is distributed, then it behooves you to prove that there are exceptions to the rule. I assume that we will be speaking of the long term memories that are coded, and not short term.

This isn’t a problem: if you locate the areas in which memory is not distributed, you locate those in which it is. Simply because memory is not distributed in every nook and cranny of the brain does not mean it isn’t distributed.

That’s exactly evidence for the holonomic paradigm. Please tell me which of Pribram’s claims in the interview I linked you disagree with. For instance, what objection do you have with

Funny, I find the conjectures to be in good accord with the data.
Why, exactly, do you say there is no substance behind the conjecture?

You claimed that there was no empirical evidence and no current way to get any.
I deluged you with cites proving the exact opposite.

I figured that there wasn’t all that much to add to the URLs other than “So there.”

Reality and reality my man, Reality and reality.
I am sorry that you are unable to see evidence, but I see no way to deal with the inaccuracy of your perception.

Yes… my m.o. is that I back up my ideas by showing research which supports them. I see no problem with that.

Search results for journal titles: Neural computation to Nevada employment law letter:

Number of times “Neuropsych” shows up?
Zero

Would you please provide links to your cites?

Also, under APA formatting, which most research-based journals use, a journal citation would read

thus:

Author, A. A. (Year). Title of article. Journal Title, Volume(issue), pages.

So you either left out the title of the article, or ‘neuropsych’ was the title and you left out the name of the journal. You also left out the author’s first name. Unless, of course, you were mixing APA formatting for bibliographical references with APA rules on how to cite a bibliographic source within a paper?
[eg. (King, 2004)]
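For instance, a complete APA reference-list entry would look something like the following (every detail here is invented purely for illustration; it is not a guess at your actual source):

```
King, J. (2004). Some hypothetical article title. Some Full Journal Name, 12(3), 45-67.
```

With the in-text citation (King, 2004) pointing to that full entry, anyone could actually locate the article.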

Future problems in finding the cites you are using would be much easier to avoid if you used APA formatting and cited your sources like I did in the OP, or linked to them.

(Alteration with brackets mine as per convention)

This is a very good question and one which I should’ve paid more attention to. It is a fact that the deeper you get into language studies, the more it becomes ‘talking about talking’.

Is it even possible to discuss language in an effective manner?
If not, what are the implications?

I personally believe that we can, but that naturally at a certain level of abstraction reality’s relationship to Reality is tenuous at best.

I am not sure we can control language. It’s one of those things where the limitations of observation, and description are inherent in the observer. We can analyze, as in the color examples, but generalizing from there to the more subtle ways that our language “colors” our perception might well be more a matter of our language controlling us, than vice versa. Language grows, adapts, and changes quite dynamically. Academic analysis would be hard pressed to even describe it in the same time frame, much less attempt real time dynamic control of it.

Description is very powerful. I can describe the sizzle of a hamburger cooking, and mention its aroma, and the glistening of its juices as I put it on a fresh hot bun, and suddenly, in half a dozen homes and offices in other cities, people are hungry, even though there is no hamburger. The most straightforward example of deliberate use to alter Reality I can think of is advice on modifying the behavior of a child with behavioral challenges. What I generally advise a parent is quite simple in design, and very powerful in execution if applied consistently and thoughtfully. When your child exhibits a behavior that you do not want to increase in frequency, do not verbalize a proscriptive description of the behavior. (Such as “No! Stop running around in the house.”) With a child, the words used describe running around the house. That behavior has just been repeated in the child’s mind, once when they did it, and once when you described it. It will happen again sooner, because you described it. If you school yourself to alter your description to a behavior incompatible with running around inside the house, you do not increase the objectionable behavior, but instead increase the likelihood of the replacement behavior. (Such as, “Can you run outside, and find me a very nice red leaf?”)

It happens constantly. But you are discussing changing the acquisition of specific linguistic characteristics to achieve a stronger consensus in realities, hoping to reduce the variance from Reality. I am not certain that that is the most likely outcome.

I am skipping a few points here, out of sheer sloth.

More of a philosophical quirk of thinking than an argument. It seems to me that the very individuality of our perceptive point of view has an inherent quality that is both a barrier and a strength. Part of what is essentially me is what I see that you do not. The same, of course, is true of you. The ESP example was just a shortcut to consider the possibility that that difference was minimized to an extreme extent. Or, as my grandfather used to say, “If two people agree about absolutely everything, one of them is unnecessary.”

I am reluctant to jump on the wagon with only accuracy and efficiency as inducements. Perversity, perhaps, but that is a “color” inherent in my own reality.

Problems are features of every reality, and swapping the old ones for new ones is a pretty good description of all human history.

Possibly, but identifying which ones those are is a task that would daunt Plato, or Kant.

Don’t get me wrong. I do not entirely disagree with you. I am just a bit more cautious in my expectations of how much to attribute our differing realities to linguistically induced perception since I already have lots of examples where highly similar speaking patterns express divergent world views. People are pretty fond of their own point of view, for the most part. And the more similar the world view, the more ardently folks tend to insist on their particular divergences. (Ever see the Democratic Convention?)

Tris

“Reserve your right to think, for even to think wrongly is better than not to think at all.” ~ Hypatia of Alexandria

Simply because I love Hypatia: “To rule by fettering the mind through fear of punishment in another world is just as base as to use force.”

What if we give up an absolute paradigm and instead talk about influencing, or significantly influencing, language, and always refining our ability to interact with and improve our tools?

A good point… how to keep strange loops out of the equation?
I suppose I am possessed of some large dose of optimism. I think that a good few of Bucky Fuller’s ideas, including ephemeralization, were on target. I subscribe to the hypothesis that the human Mind (yes, cap’) is capable of ever increasing degrees of success in Universe.

Human language really isn’t all that different. Figure out the underlying pattern, and you’ve got 'em. Admittedly you’ve got a tiger by the tail, as you’d then have to continue to gather data in order to keep your model current… quite a task, I’ll agree.

For an elaboration on Universal Grammar:
[Languages and Intercultural Studies - Heriot-Watt University]"UG = (short for Universal Grammar) Chomsky’s term for the common blueprint guiding and constraining the acquisition of grammars of all natural languages past and present which is available to any normal human child acquiring its mother-tongue. It is, therefore, a major object of study in theoretical linguistics. It is not a grammar in itself nor is a collection of commonalities from existing languages. UG is essentially the explanation for why child language (grammatical) development proceeds as it does, why the child does not try out the many logical possibilities for making sense of the language data (input) to which it is exposed. UG is best thought of not as a grammar but a set of limitations on how to build a natural grammar, limitations without which the child could never accomplish the feat in such a short space of time with very little help. Other types of language development, e.g. lexical development as well as facts about language use in context and language processing generally fall outside the domain of UG and are acknowledged to require other explanations. "

Yes, as I was reading that I got hungry, (and actually went out and picked up a burger), bastard :wink:
I’ll take it as an object lesson.

I don’t understand this example. Why exactly would a child repeat a behavior that they might’ve enjoyed anyway just because they hear you say something? It just sounds… off to me. Can you give me some data to back up your observation?

Again, this idea seems counterintuitive to me. (As many important concepts are to people who don’t know the data yet.)
Wouldn’t it be more accurate to say that you had channeled your child’s actions into something else (along with offering to play a ‘game’ with the child vs yelling at 'em)? I mean… if you say “No running!” all you’re telling them is what not to do, and young kids are generally hopped up on being young and vital anyways. So they feel like moving a bit, sometimes…
Again, if you’ve got some data that I’m unaware of, or can explain this more thoroughly, I’d appreciate it. It just doesn’t seem to make sense to me.

I would posit that a stronger consensus on external Reality and a greater freedom of internal reality would provide maximum benefits. That is: allow for individualism, difference, change, imprecision, and doubt, and possess a consciousness of abstracting. It would increase the signal/noise ratio of reality. It would not, I do not think, remove variance from Reality, nor would it aim to. Individual points of view are, physically, different and separate. One cannot take into view all of Universe in one glance. But perhaps being conscious of this is a first step?

The happy looking kind?
Or the kind that looks like it’s gonna eat you?
Because, if you’re not able to get to all the points because there is a sheer giant sloth the size of a freakin’ bear that’s menacing you… I suggest you shoot it. Or run away. For god’s sake, there’s a giant sloth standing next to you, run, run!!!
(ahem)

Agreed, and I think it jibes with my above text. Would you agree? (Not the text about the Yoda of the animal kingdom.)

Agreed, this distinction, and this privilege, must be preserved.
But by the same token, there are at least certain essential aspects of Reality that we should be able to agree on. Yes?

It wouldn’t be absolutely everything. You could like potato chips while I hate 'em. You could really enjoy horror movies while I like movies that make me think. You could think that Suzy was the hottest woman on god’s green earth, I could think she wasn’t really cute. You could believe that society should provide universal healthcare, and I could believe that every person should have to pay for it on their own. You could believe in God, I could be an atheist, etc, etc, etc…

In short, what made you human, what made you you would not change or be challenged. I believe we could do this while still getting our realities into greater touch with Reality.

Would it not be better for any individual person to be able to accurately and efficiently abstract Reality? How exactly would it be harmful?

To steal a phrase, history is a nightmare from which I am trying to awaken.
Sometimes, I believe, optimism is essential.

We stand on the shoulders of giants, thanks to time binding. I for one will not accept that it is impossible to surpass what has come before. I fully believe that given sufficient time and attention to the sciences (and providing we don’t blow each other to bits), this race of upright apes will be capable of limitless progress.

Two things.
-I would agree (I think) with the first half of your sentence. I subscribe to the weak S/W/K, under which there is a certain effect of language upon thought and of thought upon language, but those are not the only variables.
-Can you please share as many of these examples as you’re able to?

But certain things are true. Fire will burn you. Lack of oxygen will kill you. Generalizations do not speak to the fundamental nature of the ‘groups’ for which they are referents. Etc…

I suspect that has more to do with politics than anything else.
But I may be wrong.

D’oh! If I find out your sloths had anything to do with my miscoding!!! ~shakes fist~

"UG = (short for Universal Grammar) Chomsky’s term for the common blueprint guiding and constraining the acquisition of grammars of all natural languages past and present which is available to any normal human child acquiring its mother-tongue. It is, therefore, a major object of study in theoretical linguistics. It is not a grammar in itself nor is a collection of commonalities from existing languages. UG is essentially the explanation for why child language (grammatical) development proceeds as it does, why the child does not try out the many logical possibilities for making sense of the language data (input) to which it is exposed. UG is best thought of not as a grammar but a set of limitations on how to build a natural grammar, limitations without which the child could never accomplish the feat in such a short space of time with very little help. Other types of language development, e.g. lexical development as well as facts about language use in context and language processing generally fall outside the domain of UG and are acknowledged to require other explanations. "

And just to be clear, this isn’t supposed to say that all languages are identical, just that they all follow the same rules and, pretty much, have the same ‘essence’.

Absolutely. A lot has to do with detecting movement and with prediction. We look for what we expect to see, and are more likely to see what we expect. The believing-is-seeing comment is spot on, in that regard. Prediction is very useful, because it allows us to pay attention to many more things at the same time. And you can already see how important this is in a baby a few months old, who is learning to predict things like when objects of different shapes will fall off the table, just by looking at them. Some recent research has uncovered some very interesting bits about this.

Yes, I think it’s possible to override, if you will, our behavior that is linked to biological components. An easy example is a prisoner who goes into a hunger strike.

I think we can certainly train ourselves to a fairly great extent to question our expectations. For instance, in the change example you gave, after you know what happens, you can pay more attention. You can develop or use alternate means of perception. You can work together with other people watching from different angles. Etc.

Asking questions where people normally don’t, goes a long way.

Are you aware of the (often duelling) fields of Generative Grammar and Functional Grammar?

I was unaware of what concepts the terms you used referred to (irony so thick you could cut it with a knife)

Upon reading up a bit, I see that GG is Chomsky’s system and FG is a question of utility.
I’m aware of both arguments, and I side, roughly, with Steven Pinker in a modified Chomskyian view, but also agree to a certain degree with the hypothesis that more highly developed linguistic skills are a sexually selected heritable trait. I don’t see the two models as mutually exclusive. It is quite possible that there are a set of natural rules that are genetically bound and interactive with various environments. This, to me, even makes more sense. A human desire to communicate does not invalidate the expression of a genetic imperative to develop an art. I think, if anything, it channels it.

To put a finer point on it:

-There is evidence that language has a genetic component (eg. three year olds are ‘grammatical geniuses.’),

-There is evidence that language has a biological component (eg. evidence that linguistic proficiency is a trait which females sexually select for, at least statistically depending on what phase of their menstrual cycle they’re in)

-There is evidence that language has a social component and that the drive towards communicative competence itself helped shape the evolution of the brain.

The way I see it, these three facts can all be true.
Do you have a differing interpretation?

(Oh, and, I’m going to see if I can’t enter the realm of Morpheus. I’ll respond to your other post some time tomorrow)

GG and FG have developed a fair bit, over time, but the general impression, among others from empirical research, is that FG is a lot more plausible description of how humans work, whereas GG is a description more of how the encoding of meaning works in grammar as a stand-alone thing, on the level that would allow a computer to parse text as exactly as possible with as little outside help as possible. At least that’s my take on it. I can definitely see the use of GG for prescriptive purposes, such as writing computer software that parses and, more importantly, improves texts. Particularly TGG fascinated me. But:

http://kybele.psych.cornell.edu/~edelman/on-Jackendoff/

Of the FG strain, the LFG is most interesting I think in terms of learning more about humans, and an overview of what’s happening in that field can be found here:

http://www-lfg.stanford.edu/lfg/

Note in particular these four categories:

  • LFG Morphosyntax
  • Optimal Syntax: LFG in an OT Setting
  • Glue: Linear Logic for meaning assembly in LFG
  • DOP-LFG: probabilistic lexical-functional analysis

Anyway, these two were in hot contention for being the latest fad in grammar theory when I was at university, and I just wondered if you heard about them.

But I think your basic premise is right, and pretty much the way I view things. Language is first of all a way of translating information in order to enable and improve communication under varying circumstances that make this useful to our species. All sorts of issues come into play here, physical, cultural, social, historical, etc. and you can use a great range of tools to look at the things that go on, stretching from neuro-biology via formal logic to group psychology and everything inbetween.

I have been contemplating my “objection” to the effort to unify language, which is what I perceive your proposal to require. (OK, perhaps only as a late term consequence, but part of the goal, in any case.) I find, as I dreamed of it last night that I have reservations about different aspects of it.

First, let me say that community is the most effective demonstrated power to unify language. Whether by war, or economics, or technology, communities with multiple languages have often become unified, and their language has begun immediately to unify as well. Spanish explorers, Conquistadors, British Empire, American economic influence, in every case, the diversity of language has decreased as the aspect of community has increased. Shared environment promulgates shared language.

Taking the example of my own country, the diversity of tongues in the United States decreases still, even five hundred years after the initial contact between cultures. The languages of the native American cultures are disappearing at a relentless, and accelerating rate.(*) Even those that are being actively preserved are dying as spoken languages, and are becoming museum pieces. The cultures that sustained them no longer exist, and the power of the common language is far too powerful to be resisted by the few remaining native speakers. It is an inevitable process. I find it sad, but I don’t know that a remedy to that sadness is possible.

(*)My Word spell and grammar checker just made my point for me! It insists that native American should be Native American. The linguistic assumption implicit in the grammar rule is that there is but one proper culture that is native to America. Thousands of languages, in dozens of major groups are included in the native American cultures, yet in our language, these are already defined by our grammatical conventions to be a single culture. In my own region, the culture which originally lived here, was not part of the Algonquin, or the Iroquois, or the many nations to its south, but rather a separate multi-tribal region specific to the Chesapeake Bay, with powerful neighbors to the north, south, and west. The Powhattan Confederation is remembered, because of their association with colonial civilization, but most of the rest simply got absorbed without any specific notice. Even the remembrance of Powhattan, and Pocahontas is almost entirely fiction, unless you are discussing the young girl forcefully abducted from her home, and sent to England, to die at the age of 21. The tribe of Powhattan and his brothers was but one of dozens in the Chesapeake basin, and their languages have gone without a trace.

Now, what you discuss is hardly similar in intent to the manifest destiny which eliminated native cultures in this continent, and I don’t accuse you of it. However, unification of language is an ongoing process, without any specific decision to do so. Taking the step of actively trying to encourage it might well be bailing with the tide.

Then there is the example of the French. The language is, by law, required to conform to regulatory control. Again, a bit more stringent, and not of the same order of magnitude as what you contemplate, but another example of applied control of language. I think the French have nailed the lid on their own language’s coffin, and in a thousand years it will be as dead as Attic Greek. Not because of any failing in the language itself, but because other languages, specifically English, will not be so constrained, and will become default choices for new thinking. To quote Walt Whitman on language: “Language is not an abstract construction of the learned, or of dictionary makers, but is something arising out of the work, needs, ties, joys, affections, tastes, of long generations of humanity, and has its bases broad and low, close to the ground.”

Such a basis will not produce languages having consistency, or accuracy as prominent characteristics. But they will have adaptability, and facility in growth that exactly matches the intellectual character of the culture that speaks that language. We say what we think. And to at least some degree, we cannot think what we cannot say. So, we make up words.

Communication requires similarity in language. As you and Chomsky point out, all languages have similarities of structure, and grammatical form, because all languages are trying to accomplish similar aims. But the differences are as important as the similarities! The map might well be the territory, when it is a map of a place no one has ever been. Sometimes the description precedes perception. Those very differences we might eliminate could well be thought crumbs dropped by someone traveling in a strange land.

Yes, there are many areas where precision of language is critical to understanding. But human interactions include a lot of other areas. Precision and accuracy are not guarantees to mutual understanding. Fuzzy thinking is an aspect of imagination. Errors in vocabulary can evolve into new understanding of old ideas.

So, while I don’t object to trying to understand what your words mean, I don’t think that the existence of differences of linguistic character is a barrier to learning. Perhaps they are signposts showing where the undiscovered country begins. It depends on how you see the difference, and what effect difference has on you.

Tris

I already used a quote, in this post. :slight_smile:

The number of times ‘neuropsychology’ showed up is left as an exercise for those with fully functional visual systems.

You will also note that things like ‘Neurosurg Rev’, ‘Neurol Sci’, etc do not show up. This remarkable phenomenon is known as ‘abbreviation’.

Luckily enough, that wasn’t your cite. You cited the journal title as “Neuropsych”, which does not exist, anywhere. Not even once.

Moreover, number of journal titles which contain “Neuropsych”

Yes, and which one of the ten journals which start with the letters ‘neuropsych’ was this an abbreviation for? What logical process was I to use to winnow them, especially since, as I’ve already pointed out, the author ‘king’ and the subject ‘memory’ yield zero results?

Again, please either link to your cites or provide a citation that someone could actually use.


Sorry everybody else for cutting and running, but I had a minute in front of the comp and this was rather easy to respond to.
I really should be running along. I’ll answer responses this evening as I’m able.

Cite? (Looking through the thread, I cannot find the specific article that apparently found this. Author, year, journal, and volume would be sufficient. This thread consists of so many large posts that I may have missed the cite. If so, I apologise.)

Anyway, I will state once more that lesions to specific regions of the brain have been shown to lead to specific memory deficits. Now, very often there is not complete memory loss for a specific type of memory. However, there are differential memory losses which are, AFAIK, not consistent with a memory model where all storage is holographic. I.e., if holographic storage says that all memories are distributed over some area of cortex, then, as you note, it should not be possible to demonstrate differential memory losses via cortical lesions. Note again that there is a very large literature demonstrating time and again that differential memory losses can and do occur. Indeed, behavioural science is replete with terms that describe specific types of memory - spatial, temporal, and so on.

Here are some more examples, where I’ll give a bit more detail since you appear unwilling to go back to the sources yourself. I have also provided a DOI, where possible.

Pierrot-Deseilligny et al. (2002: Ann Neurol 52: 10-19). This article reviews some of the spatial memory literature, including some of the work that has demonstrated differential memory losses. This culminates in a figure (Fig 3), where the authors propose an organisation of spatial memory into rather discrete cortical regions, with long term spatial memory proposed to be located at or near the hippocampus.

Ferreira et al. (2003; Brain Res 987: 17-24) found that lesions in the dorsal striatum of the brain differentially affected rats’ learning of fear. In particular, learning to fear a tone was impaired whereas learning to fear a location was not.

Now, you may argue that this represents differential memory-encoding processes, and not differential storage locations. However, as the authors note:

“Many studies have shown that lesions or pharmacological manipulations of the hippocampus impair spatial processing-based tasks (for a review see [20]) although some controversial about the involvement of the hippocampus in mediating spatial memory exists (for a review see on this topic [11]). In contrast, nonspatial tasks are not affected [8, 19, 20, 22, 25, 26 and 38]. Several of these nonspatial tasks are impaired by lesions in the dorsal striatum [6, 17, 23 and 24], supporting early findings that have suggested a role for the caudate-putamen in mediating learning and memory [32 and 33], and in accordance with the notion that there are multiple parallel memory systems in the brain [37].”

Note that many studies are cited which support a distinction between spatial and non-spatial memory (the bracketed numbers refer to citation in Ferreira et al.), and specifically that these can be differentially impaired by damage in different regions of the brain. The authors also make reference to multiple parallel memory processes, i.e., different cortical pathways that encode different classes of memory (e.g., spatial vs. non-spatial).

A further example is in Martin et al. (2005: Neuropsychologia 43: 609-624), who found that hippocampal lesions in the rat led to a loss of memory for a maze-type task. The authors note that:

‘These findings indicate that spatial memory for a hidden platform position in the watermaze, unlike certain forms of non-spatial memory, is permanently dependent on the integrity of the hippocampus.’

Note the suggestion that spatial versus non-spatial memory can be selectively impaired.

Humans can also show differential loss of memory (or learning). For example, Kessels et al. (2004: J Neuropsychol Soc 10: 907-912) found that patients with lesions in the right versus left hippocampus had different patterns of memory loss and encoding. Also, Davies et al. (2004: Eur J Neurosci 20: 2441-2446), in a study involving Alzheimer’s patients, state that:

“In conclusion, atrophy of the human perirhinal cortex, and of directly connected areas, was associated with semantic memory impairment but not episodic memory impairment, as predicted from the primate work.”

(N.B: I have only read the abstract of this paper).

From your discussion so far, I do not think this agrees with a (strict) ‘holographic’ memory model, because all memories are not distributed in the same brain regions. Once again, I’ll stress that some brain lesions alter some types of memory (or memory encoding) but not others. Now, I think that memory quite possibly is distributed within certain brain regions, for specific types of memory.

I suspect you will now move on to debating memory storage versus encoding and holography, but I may be wrong.

Regardless, I very strongly suggest you look at the selective memory impairment literature before you commit so readily to your holographic and homogeneous model.

I do not see what in DeValois^2 supports a holographic memory model. BTW, DeValois^2 is a pretty standard - and good - text on visual processing, covering a decent chunk of neuropsychology.

See above. I do not see anything in DeValois^2 that suggests all memories are distributed over the same areas of the brain, nor anything that says it is not possible to selectively impair memories.

Now, can you say precisely what in DeValois^2 you think supports a holographic memory model (e.g., page no., quotes)?

If you’re just using a library catalogue browser, then that will not find the references. You’ll need to use a literature search engine, such as Web of Science, PsycLIT (e.g., via SilverPlatter), or PubMed. If you’re using a similar engine, but one geared for linguistics and related fields, then it will not index much psych or med material.
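As a concrete illustration of the kind of search I mean (this is my own sketch, not anything from the papers above): PubMed exposes its search through NCBI’s E-utilities `esearch` endpoint, so an author-plus-topic query like the disputed ‘king’/‘memory’ one can be built programmatically. The helper function name here is my own invention; only the endpoint and the field tags (`[Author]`, `[Title/Abstract]`) are real PubMed conventions.

```python
from urllib.parse import urlencode

# NCBI's E-utilities esearch endpoint for PubMed (a real public service).
BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_query_url(author: str, topic: str) -> str:
    """Build an esearch URL for an author + topic search.

    Uses PubMed's standard field tags: [Author] restricts to author
    names, [Title/Abstract] restricts to title and abstract text.
    """
    term = f"{author}[Author] AND {topic}[Title/Abstract]"
    return BASE + "?" + urlencode({"db": "pubmed", "term": term})

# The thread's example: author 'king', subject 'memory'.
print(pubmed_query_url("king", "memory"))
```

Fetching that URL returns an XML list of matching PubMed IDs; if it comes back empty, as it apparently does here, the cite as given simply cannot be tracked down.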

Well, you are quite correct that I was not really precise enough, but I am happy to leave Prosopagnosia if we’re going to wander down an encoding vs retrieval vs conscious awareness path - I think we’d just get bogged down even more than we already are.

Yes. It is very difficult to know where exactly the problem lies, that is true. As above, I suggest we don’t go down the Prosopagnosia path too much, unless you have particular desire to. (I’m biased - I don’t know a huge amount about Prosopagnosia).

No, it does not (but I’d appreciate the cite - forgive me if I missed it, this thread is chock-full of huge responses, easy to miss stuff). As I note above, a great deal of literature shows that selective memory impairment can and does occur.

If you are going to continue to push this point, I’d appreciate it if you could address some of the actual literature I’ve cited. In turn, I will address the paper you claim shows memory is not at all isolated (in the rat brain), when you give the cite.

See citations above (and earlier in thread). I think sticking to LTM is fine, else there’s a risk this thread will explode beyond all recognition.

I think that the weight of the literature suggests that the brain is divided into somewhat distinct processing regions, and that memory itself is not some unitary ‘module’, but instead consists of a series of ‘modules’ that encode different types of memory. Now I think it’s well accepted that memory is distributed to some extent, but there’s no convincing evidence that all memories are distributed over a single ‘holographic’ unit.

I’d prefer a specific cite - a paper, not a web site.

Well, the idea is hopefully not to find research that supports your ideas, but instead to see what the research says and build a model of the world (or brain) that fits the available research. If you think that the selective memory impairments found in the literature are either (1) wrong, or (2) actually support a holographic model of memory (i.e., homogeneous, with no distinct memory ‘modules’), then can you say why? If you are simply suggesting that there might be many brain regions, each of which shows some sort of specific, but distributed, memory encoding, then I won’t argue against that.

First, I would remind you that, as per your Prosopagnosia example, you can respond to a conscious test of memory as ‘not having it’ or the memory ‘having been destroyed’, yet still access it subconsciously. I would also posit that coding for short-term memories is not the same as coding for long-term memories, and even if there is a region of the brain which regulates the coding or control of memories, that does not mean those memories are localized in it.

And, to be honest, we’re getting pretty far afield of the thrust of my OP. Besides, even if the human brain cannot be said to operate holographically, people are still different day to day, and FinnAgain[sub]March 9[/sub] is not FinnAgain[sub]March 10[/sub]. And while this is an interesting tangent, it’s really secondary to the main issue. Which, of course, is:

“So, the question arises, how do we best educate our children, train our own semantic reactions, and construct our verbal and written utterances in order to reach the greatest accord with Reality and the greatest evolutionary relative success for our species?”

If you see issues of societal import in the coding/access of memory under your model, I think that would be a good direction in which to take this tangent, otherwise I’d request that you confine your remarks to some last words, and if you want to start a new thread to discuss this I’ll be happy to post in it.

You may have the floor.

Yeah, some of the recent work on saccades is absolutely fascinating.

This division, this divide between the-mind-that-thinks and the-mind-which-questions-thoughts is something which baffles me and invites me onwards. I am vastly curious.

Cool, we seem to be in agreement.

Interesting tangent to this subject… Humans can echolocate

I personally would also like to run some experiments on change blindness with people in certain mindsets. I’m not quite sure whether that could be overridden in the moment, at least; it might require reflection… curiouser and curiouser.

I would agree without reservation.

(There needs to be a :yikes, we’re using too much jargon!: smiley)
:wink:

I don’t see these as necessarily opposing factors: CG being the description of certain inborn limits/genes/brain dispositions/what have you, and FG being the description of what (and why) we do what we do with our brains, which in turn work the way they do because of CG.

Do you think we’re on roughly the same wavelength?

Agreed.
But I think it also helps explain some of the constraints and underlying ‘nature’ of our Language. (yeep, cap’)

And, wow. I’d never read that paper before, thank you. It’s going to take me a little while to mull over, I can’t quite wrap my mind around it right now since I’ve been up all day, but I’ll think about it and I will get back to you in the next few days on this point.

Thanks for that link too. I haven’t devoted as much research time to pure linguistics… I’m going to sit down and feast upon the data, soonish. I’ll get back to ya.

I’d not heard of them while I was taking my linguistics courses. Hrm. Then again, I suppose every professor has their biases, and a semester is only so much time to give a decent survey… I don’t have the free time to enroll in any graduate-level linguistics courses now, but that might be an interesting thing to do over the summer.

Oh, and what years were you at school, from when until when? I’m trying to get a sense of how current my knowledge is… (And I take it you’re either European or Australian? You used the phrase “at university”.)

So the question which strikes me, and which drove my OP is, in essence,
where do we go from here?

Triskadecamus: I will get to your post soon. Scout’s honor.
erislover: I’m sorry if I was dismissive, or if I missed any points which you’d addressed to me. You seemed to have a take on this topic and I’d love to hear more.