Towards a unified theory of perception, cognition, language, Reality and realities.

Arwin - I don’t think we’re actually disagreeing on anything, we’re just emphasizing different aspects of the visual mechanism. I’m less interested in the physiology of the eyeball and its effects on our perceptions than I am in the interpretive hardware of the brain, and what it does with the information supplied by the eyes.

To clarify my point a little, in V.S. Ramachandran’s Phantoms in the Brain, he documents an odd case of synaesthesia. An individual claimed that he saw black-on-white crosses as coloured red. This claim was checked by producing finely detailed black-on-white patterns which incorporated large figures (e.g. a star, a numeral 2) made out of fine crosses. A person such as you or I would take several seconds to pick out the figure from the background pattern. However, this individual would see them instantly: a big red star or numeral 2 in the middle of the pattern.

I don’t suggest this ability was particularly useful; in fact, quite the opposite, in that his brain hardware was providing a distracting emphasis to visual patterns that didn’t merit it. Instead, what I’m suggesting is that the “change blindness” phenomenon is a similar effect in reverse. Our brain hardware de-emphasizes unimportant changes to a person we are having a conversation with, perhaps because in terms of evolution it is more important to concentrate on other types of change.

I asked the question about whether a “victim” of the change blindness experiment would catch a ball, not because I’m interested in the different areas of the retina, but because a thrown ball is a relatively small change in a visual field compared to a complete change of clothes on a person. I suspect that even if the ball arrived over the “interviewer’s” shoulder (to eliminate any effects of eyeball physiology), the “victim” might well dodge it simply because the brain hardware judged it important and emphasized it in the visual field.

I wonder whether the “change blindness” experiment would give a different result if it were carried out using more survival-related changes. For example, instead of changing the “interviewer’s” appearance, change their facial expression from engaging to hostile. I suspect the results might be different.

While I personally find this stuff interesting, I don’t think it is very pertinent to the OP, in that I consider most of our low-level visual interpretive hardware to be hardwired, the result of evolution, and not subject to cultural or language bias. (Of course, this thread contains some counter-examples. I could be entirely wrong!) It is the more high-level interpretations that are mostly influenced by language and culture, and which perhaps we should pay attention to.

In Orwell’s 1984, the agencies of Big Brother attempted to create the language “Newspeak” which would eliminate such concepts as “rebellion”. In Vance’s The Languages of Pao there is an attempt to create a caste of warriors by teaching them a constructed language called “Valiant”. An excerpt from the latter:

“To illustrate, consider the sentence, ‘The farmer chops down a tree.’ In the new language the sentence becomes: ‘The farmer overcomes the inertia of the axe; the axe breaks asunder the resistance of the tree.’ The syllabary will be rich in effort-producing gutturals and hard vowels. A number of key ideas will be synonymous; such as pleasure and overcoming a resistance - relaxation and shame - outworlder and rival.”

The idea that language and culture will affect our interpretation of the world is an old one. Whether this can be said to affect our personal reality is the subject of this thread. I’m guessing that at a basic level - what we see, what we hear, etc. - the influence is small, whereas at a higher level - what we think about what we see and hear - the influence is larger and in many cases detrimental.

As to what can be done about it - travel. Read. Learn other languages. Talk with people whose reality conflicts with yours. Reading threads in the GD forum doesn’t hurt. It’s certainly battered a few of my preconceptions apart, although whether my “reality” has been altered is harder to say…

FinnAgain - re your last post - bravo!

Heh. That’ll teach me to quote. Bravo to this!

FinnAgain - when I’m bored at work, I paste the checkershadow illusion into Paint and play with it, bridging squares A and B with the same shade of grey, seeing at what point I can force my visual hardware to re-evaluate its interpretation. I consider it a beautiful example that what you see is NOT the world - it’s a best-guess construct of the world, contrived by our visual processing hardware.
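
To put a number on the “same shade of grey” claim, here is a minimal sketch of the kind of pixel-poking you can do instead of Paint, assuming a local copy of the illusion saved as checkershadow.png; the filename and the (x, y) coordinates for squares A and B are placeholders you would have to adjust to your own copy of the image.

```python
# Sample the pixel values of squares A and B from a local copy of the
# checkershadow illusion. Filename and coordinates are assumptions.
from PIL import Image

img = Image.open("checkershadow.png").convert("RGB")

square_a = (115, 115)   # hypothetical point inside square A
square_b = (170, 230)   # hypothetical point inside square B (in the cylinder's shadow)

print("Square A RGB:", img.getpixel(square_a))
print("Square B RGB:", img.getpixel(square_b))
# On the standard image both values come out identical, even though A looks
# dark and B looks light - the "bridging" bar trick makes that identity visible.
```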

The Dawkins Christmas lecture was mostly about evolution; the digression into perception was again to demonstrate that what you see is not the world outside. I’ve been unable to track down the exact experiment he was talking about, although this paper contains similar experiments.

http://wexler.free.fr/papers/vis_stat.pdf

I’m not sure the implications are particularly Earth-shattering, simply counter-intuitive. When you look around you, what you are seeing is interpreted information. If I may be tiresome for a moment, what you are seeing is a map. Everything has information attached to it - a guess at what it “is”, what size it is, how far away it is, whether it is convex or concave, whether it is moving, etc. “Pencil sharpener, small, on desk in front of you, nearby.” “Chair, medium-sized, left, nearby.” “Car, out the window, across the street, large, far away.” “Cannot identify, on floor, to the right, nearby.”
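
If it helps to make the “map” idea concrete, here is a toy sketch of what such a labelled map might look like as a data structure; the field names and entries are purely illustrative inventions of mine, not anything from the vision literature.

```python
# A toy "perceptual map": each entry is a best-guess label plus estimated
# attributes, mirroring the examples in the paragraph above. All made up.
from dataclasses import dataclass

@dataclass
class Percept:
    label: str       # best guess at what it "is" ("cannot identify" is allowed)
    size: str        # rough size estimate
    location: str    # where it sits in the visual field
    distance: str    # rough distance estimate
    moving: bool     # guessed motion state

visual_map = [
    Percept("pencil sharpener", "small",  "on desk, in front", "nearby",   False),
    Percept("chair",            "medium", "left",              "nearby",   False),
    Percept("car",              "large",  "out the window",    "far away", True),
    Percept("cannot identify",  "small",  "on floor, right",   "nearby",   False),
]

for p in visual_map:
    print(p)
```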

What is interesting is that at any one time, what is actually arriving on your retinas isn’t very detailed - the stuff right in the centre is in focus and is hitting a high density of cones, everything else is out of focus, hitting a low-density rod-cone mix, and you have a blind spot in each eye. If you were seeing it “directly”, you could imagine it as a large, left-right split-screen of two blurry images, each with a small circle in the middle in good focus, and each with a dark blob occluding part of the blurry image. But that is not what we seem to perceive - our hardware takes the two images, plays “fill in the blanks” with a lot of it and gives us what seems to be “the world” - a map showing everything we can see, with labels if they are recognisable and estimates of their size, position and motion relative to us.
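
For anyone who wants to see roughly what that fuzzy input might look like, here is a rough sketch that fakes one eye’s view from an ordinary photo: sharp only in a small central disc, blurred elsewhere, with a dark blob standing in for the blind spot. The filename, blur radius and geometry are all arbitrary choices of mine, not physiological values.

```python
# Crude approximation of one eye's raw input: sharp fovea, blurry periphery,
# blind-spot blob. All parameters are arbitrary, for illustration only.
from PIL import Image, ImageDraw, ImageFilter

img = Image.open("scene.jpg").convert("RGB")   # hypothetical input photo
w, h = img.size

blurred = img.filter(ImageFilter.GaussianBlur(radius=8))

# Mask: white disc in the centre keeps the sharp image, the rest takes the blur.
mask = Image.new("L", (w, h), 0)
draw = ImageDraw.Draw(mask)
fovea_r = min(w, h) // 12
draw.ellipse([w // 2 - fovea_r, h // 2 - fovea_r, w // 2 + fovea_r, h // 2 + fovea_r], fill=255)

retinal = Image.composite(img, blurred, mask)

# Blind spot: a dark blob at an arbitrary off-centre position.
bs_r = min(w, h) // 20
ImageDraw.Draw(retinal).ellipse(
    [w * 3 // 4 - bs_r, h // 2 - bs_r, w * 3 // 4 + bs_r, h // 2 + bs_r],
    fill=(30, 30, 30),
)

retinal.save("retinal_approximation.jpg")
```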

How important is this? The bushman who left the forest and saw buffalo on the plains interpreted them as insects close by. Occasionally I will see a bit of dark fluff on the carpet as a spider. The hardware makes bad guesses sometimes, can be fooled other times, and kicks out nonsense when given the unfamiliar and no clues. But what can you expect? If you saw reality directly all the time, the fuzzy split screen, do you think the application of your intellect to the raw data would help you do a better interpretation job than your built-in hardware? I’m guessing most times it wouldn’t, and it would be much slower in any event.

I’m not sure that the culture-specific illusions are too important. We have internal “libraries” of comparison images for identifying things. If you show us something completely outside of our experience, we’re going to have a “what-the-fuck?” moment, and/or our hardware is going to make a best guess from its “library”. But we are designed to learn fast: in such a situation we add the abnormality to our “library” and we’re set.

This is what I’m most interested in… how, exactly, does the human nervous system ‘decide’ what to pay attention to, and what not to? I assume that assumptions and areas of interest, as well as contrast and movement, might all be factors, but is there any hard data out there right now?

That is neat… and it makes me wonder, are there any perceptual consequences in day-to-day life? I’ve not yet researched the current literature on subliminal perception; does anybody else have any knowledge on this?

A good point. Would you argue that this phenomenon is confined to sight? Or that in the same way the brain creates images out of the chaos of visual input, it also creates concepts out of the flow of events?

I don’t have a problem with grandiose per se, though it’s not really my bag. I do think we need to be careful not to over-interpret some properties of the human (visual) system, or to read too much into aspects of the visual system that we have not yet fully understood.

For example, Arwin mentions peripheral vision’s good sensitivity to flicker (incidentally, flicker is also more visible with brighter stimuli). However, I would be very wary of saying anything about ‘subliminal’ perception based on this, and I certainly cannot see how peripheral sensitivity to flicker would impact teaching (or the language used to teach).

There is a huge amount of hard data! Moreover, researchers in attention now tend to work within specific subgenres and there is so much data that it is not trivial to extract ‘rules’.

There are some general principles, such as that information available from vision tends to dominate that available from the other senses. However, this is not universally true, and for some stimuli, vision is dominated by other senses. Further, the degree to which conscious direction of attention can overcome the influence of one perceptual system on another varies depending on the nature of the task, stimuli and senses involved.

There are some really funky results. For example: run an experiment in which someone judges whether their forearm has been touched by one pinprick or two. At some point, when the two pinpricks are very close together, they become indistinguishable from a single pinprick. This is a 2-point discrimination task (2PDT).

So, do this experiment when the person’s arm is hidden from their view. Then, do this experiment when their arm is visible before and after the pinpricks (but not during). Then, do the experiment when their arm is visible and magnified (and, again, not visible during stimulation). What happens?

Well, the person is best at the task when their arm is magnified, a bit worse when their arm is visible unmagnified, and worst of all when their arm is not visible at all. Now, bear in mind that the pinpricking device was not visible - the participants could not see when they were being touched.

So, performance is improved when the arm is viewed - even when there is no information whatsoever about the stimulus conveyed by vision (see Kennett et al., 2001, Curr Biol 11: 1188-1191, PubMed)!
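
For what it’s worth, here is a sketch of how one might tabulate such a 2PDT experiment by condition; the trial data is invented for illustration (it is not from Kennett et al.), and the 75% criterion is just one common, simple convention for calling something a threshold.

```python
# Summarize a 2-point discrimination task by condition. Trial data is made up;
# threshold = smallest separation reported as "two points" on >= 75% of trials.
from collections import defaultdict

# (condition, separation in mm, reported "two points"?)
trials = [
    ("arm hidden",    30, True), ("arm hidden",    30, False), ("arm hidden",    40, True),
    ("arm visible",   30, True), ("arm visible",   30, True),  ("arm visible",   40, True),
    ("arm magnified", 20, True), ("arm magnified", 20, True),  ("arm magnified", 30, True),
]

by_condition = defaultdict(lambda: defaultdict(list))
for condition, separation, reported_two in trials:
    by_condition[condition][separation].append(reported_two)

for condition, separations in by_condition.items():
    passing = [sep for sep, answers in sorted(separations.items())
               if sum(answers) / len(answers) >= 0.75]
    threshold = passing[0] if passing else None
    print(f"{condition}: estimated 2PDT threshold = {threshold} mm")
```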

Now, what this might say about language, and specifically a language that gets closer to an elusive Reality … well, I don’t know.

That would be an interesting experiment. I don’t think anyone has tried it.

I am so sorry to be coming into this so late. Please excuse me if I missed some pertinent posts.

Agreed that we, each of us, do not know reality, but instead have some internal experience that is a perception of reality that is in large part fictional. We assume that others have some internal experience similar to our own, and for the most part they probably do, since we are built pretty similar and experience pretty similar things. But similar is not the same as the same.

Disagreed that the evolutionarily salient goal is to more accurately perceive reality. It is instead to be better able to make predictions about future events that are salient to the pursuit of goals. I will thus preferentially perceive colors and sounds that have something to do with my survival, whereas a bee will perceive other colors much better because they are salient to it. Understanding reality is well and good, but it is really making pertinent predictions that matters.

For humans a critical part of our reality is the social world. Thus we are designed to learn how to communicate with each other and to predict others’ behaviors. We are designed to be able to predict the future behavior of others associated with particular communications.

We experience exemplars that are associated with words and from that develop prototypes of what that word means - sort of the middle of the multidimensional fuzzy space created by those exemplars. Enough round objects are called “balls” so that the child begins to call round objects “balls”.

We use abstract prototypes to focus our attention and expectations. Some are hardwired (that is how various illusions work); some are learned. But we are thereby primed to experience what we expect to experience. If those predictions fail us then we modify our prototype when we can. Calling the orange a “ball” was corrected; calling the red balloon a “ball” was corrected - the child learns to remove objects that are eaten and that float from the “ball” space. We will learn words and concepts pertinent to success within the social world in which we must exist, but my prototype of many words and concepts may not be the exact same as yours, even if our fuzzy conceptual spaces overlap enough for most practical purposes.
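
Here is a toy sketch of that exemplar-to-prototype idea, with everything (the features, the numbers, the centroid rule) as my own illustrative assumptions rather than anyone’s actual model:

```python
# Prototype as the centre of the fuzzy space spanned by a word's exemplars;
# a correction ("that's an orange, not a ball") prunes the space and shifts
# the prototype. Features and values are made up for illustration.

def prototype(exemplars):
    """Centroid of the exemplar feature vectors."""
    dims = len(exemplars[0])
    return tuple(round(sum(e[i] for e in exemplars) / len(exemplars), 2) for i in range(dims))

# Feature vector: (roundness, bounciness, edibility), each on a 0-1 scale.
ball_exemplars = [
    (1.0, 0.9, 0.0),   # tennis ball
    (1.0, 0.8, 0.0),   # beach ball
    (0.9, 0.1, 1.0),   # orange, mislabelled "ball" at first
]
print("prototype before correction:", prototype(ball_exemplars))

# Correction: edible things get removed from the "ball" space.
ball_exemplars = [e for e in ball_exemplars if e[2] < 0.5]
print("prototype after correction: ", prototype(ball_exemplars))
```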

Yes, we are hardwired to learn this, and Pinker’s Words and Rules is a brilliant exposition of the process. Yes, plasticity to various degrees is built in so that we can adapt to the specifics of the circumstance. In short, some neural development is experience-independent and some is experience-expectant.

As to how attention is focused and why, yes, lots of research has been done. I’ll take a different angle than boldface. Our Drives prime us to be sensitive to that which will satisfy the drive. For example, if I am very hungry, I will be primed to attend to the smell of food even over a picture of a pretty girl. Our Prototypes, some inborn/some learned, prime us to attend to particular features. We are bootstrapped up with a set of these. Babies are designed to preferentially attend to faces and high-pitched voices; doing that gets rewarded with food and warmth, and slowly the prototype of a particular Mom develops, so that the drive for food and comfort is then coupled with attending to stimuli associated with her. The needy sound of “Mamama” gets associated with that entity responding, and the word “mama” is refined from “I want something” to mother.
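
A back-of-envelope sketch of that “drives prime attention” idea, treating priority as drive activation times stimulus relevance; all the weights and stimuli are numbers I made up to mirror the hunger example above.

```python
# Priority of a stimulus = sum over drives of (current drive activation x how
# relevant the stimulus is to that drive). All numbers are invented.
drives = {"hunger": 0.9, "social": 0.3}   # current drive activations (hungry!)

relevance = {
    "smell of food":            {"hunger": 1.0, "social": 0.1},
    "picture of a pretty girl": {"hunger": 0.0, "social": 0.9},
}

def priority(stimulus):
    return sum(drives[d] * relevance[stimulus].get(d, 0.0) for d in drives)

for stimulus in relevance:
    print(f"{stimulus}: priority {priority(stimulus):.2f}")
# With hunger high, the smell of food wins attention, as in the example above.
```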

And on it goes. From the level of individual development up through societal development as well.

Before this thread totally dies, I would also like to chime in that Finnagain has a most valuable point in connecting the basic process that occurs at fundamental levels to the processes used by societies at large.

Chaos theory teaches us that many complex nonlinear systems are self-similar at different levels of analysis. “We”, from the basic processes of sensory perception within sensory organs themselves, to the level of societies, are such complex nonlinear systems.

The basic top-down/bottom-up interactive hypothetico-inductive processes are present at multiple levels of analysis, with the power to make meaningful predictions and the trap of illusions present at all levels as well.

We would be unable to function without inductive perceptual processes, but we need to be on guard for the times that they will give us false information (perceptual illusions); likewise at the societal level.

Thanks DSeid, I’ve been away on spring break and enjoying Vermont (and a very cuddly, brilliant gal), so I’ve been rather busy. I don’t want this thread to die, and when I get back to Austin tomorrow I should be able to post more.