How do deaf people think?

I fully understand my impression of what is going on may be completely wrong and illusory.

But I don’t think you can discard perception so easily, because we don’t have many methods to get in and figure out what is going on in a brain.

So it would seem that people’s perceptions of what is going on would at least allow us to devise tests that can tease out some of the attributes of what is happening. It seems like there aren’t many alternatives, are there?

If they used the same mechanism, then they should have similar limits to their capabilities, and they should have similar impressions about the use of that mechanism.

Whether the image belongs to a sighted person, or it’s something (an “image” relevant to their own sensory experiences) that a blind person constructs, the important point being argued is whether that information goes through post-processing that benefits from the form of the representation created in the mind.

This is exactly what nobody knows. We don’t really know whether there is any shared processing between a perceived image and a mental image.

Do you think it’s impossible that our brain is set up to channel perceived image data and mental image data through some of the same downstream processing structures?

Keep in mind I said “some”; there could be processing that can only happen on a perceived image, and maybe other processing that can only happen on resolved information (whether from the senses or from the brain). It seems premature to rule out the possibility of shared processing, unless there have been tests that indicate otherwise.

An interesting tangent: one of their examples involves imagining colored circles that overlap and trying to picture what color the overlapped section would be.

My images for processing are very sparse on color: not just black and white, but the only color variations seem to be very dark blue, very dark brown, or very dark grey. I do dream in color, though.

If I try to picture a red car, for example, it’s like it’s on a different screen that’s further away and harder to control. I can’t put it on the main screen; it sits in a different spot, faintly overlaid on the main processing area.

In that more recent paper he has some tests of 2D and 3D imagery; one involves a cube sitting on end.

Here again, I am fully able to perform the task and get the correct result (counting certain corners), and it still looks like a 3D cube after the rotation, contrary to his expectation.

wolfpup, one of the interesting tests I read yesterday that seems to indicate mental image “perception” processing is one where people are asked to imagine the letter J, and then the letter D tipped on its back and stacked on top of the J. People typically recognize the result as an umbrella.

Do you agree with Pylyshyn that there is “no perceptual processing” of mental images?

If so, how would you explain the J, D umbrella scenario?

The J/D/umbrella thing is a fairly trite attempt to show that mental images contain sufficient information for a reinterpretation to be possible, which isn’t the question here at all. It wouldn’t be hard to write a computer program that lets you assemble random shapes which it could then match up with common household objects. Is it your contention that such a program would be an accurate simulation of how we process mental images, or give us any insight into it?
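For illustration only, here’s a minimal sketch of the kind of program I mean: it combines simple shape descriptions and looks the combination up in a small library of named objects. The shape vocabulary, the matching rule, and the object library are all invented for the example; nothing about it is meant to model how mental images are actually represented or processed.

```python
# Hypothetical sketch of a shape-assembly-and-matching program.
# The parts, placements, and object library are invented for illustration.

def compose(*parts):
    # A composite "scene" is just an unordered set of (part, placement) facts.
    return frozenset(parts)

# Invented library mapping composite descriptions to household-object labels.
OBJECT_LIBRARY = {
    compose(("J", "upright"), ("D", "on its back, on top")): "umbrella",
    compose(("V", "upside down"), ("O", "underneath")): "ice cream cone",
}

def interpret(scene):
    """Return the object label that matches a composite scene, if any."""
    return OBJECT_LIBRARY.get(scene, "no match")

# The J/D example from the thread:
print(interpret(compose(("J", "upright"), ("D", "on its back, on top"))))
# -> umbrella
```

A matcher like this shows that a description can be reinterpreted as a familiar object, while telling us nothing about whether humans do it by inspecting a picture-like image or by manipulating descriptions.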

It certainly seems to be a valid question related to what is being discussed.

Did I post something somewhere that gave you the impression that your program is an accurate simulation of how we process mental images?

I don’t understand the point you are trying to make, can you explain?

Assuming you are interested in a conversation about this general topic, I’m curious about your opinion on the following:
1 - Do you think that we do not perform any perceptual processing on mental images?

2 - Are there any tests that you think would be a valid test to determine if we do perform any perceptual processing on mental images?

3 - Do you think it’s impossible that our brain is set up to channel perceived image data and mental image data through some of the same downstream processing structures?

I think D/deaf has to do with belonging to the Deaf community and using sign language vs. just having a hearing loss. People can lose their hearing at any age, or even gradually at several ages. People who are born deaf or become deaf early in life are probably more likely to end up using sign language than people who go deaf as they get older.

It’s quiet to them, and if any of them are like me (i.e., the Center of the Universe), how it affects others matters not a whit. :wink:

I think it has more to do with whether the deaf people are connected to the deaf community or not.

Consider one girl I remember at a local high school: she came from a hearing family and, while she could sign, went to a normal high school and learned to lip-read and speak at such a high level that she once won a Shakespeare speaking contest. Now, you would think such a person would be respected within the deaf community, but not so.

The issue is the deaf community, where you have whole families in which the deaf “gene” is passed on from parent to child and you have multiple generations of deaf people. They of course do NOT think it’s a disability, and they go to great lengths to deride things like cochlear implants, lip reading, or going to schools outside “deaf” schools (which we have nearby).

So this girl was not accepted by the deaf community and was shunned when she tried to go to deaf events.

Now just try to get a member of the “militant” deaf community to admit that deafness hurts their employment opportunities, hurts their lives in general, or that schools for the deaf in general have lower academic standards than regular schools, and you get a very angry response.

No, it’s absolutely not a valid question, nor is it in any way related to what is being discussed. It is an exercise that disproves something that no one in this discussion has ever claimed. No one here has ever claimed that we cannot reinterpret the information in a mental image; this is entirely a fiction that you constructed, based on your misunderstanding of a quote in a paper that you Googled, a misunderstanding we have tried to set straight in posts 19, 24, 25, 32, 37, 38, 40, and 52 (at least). You still can’t seem to get past it.

Well, when I read Pylyshyn saying there is “no perceptual processing” of a mental image, I think I understand what he is saying, but I could certainly be wrong.

If you feel you have a good grasp of what Pylyshyn means with that statement, I am open to having my understanding either corrected or confirmed.

What do you believe he means by that statement?

I don’t have a “good grasp” of anything that I haven’t seen. Can you provide a link to the context in which he’s saying this?

“To summarize, then, we have argued that the functional mental representation is not to be identified with the input to a perceptual stage but rather with the output of such a stage, inasmuch as it must already contain, in some explicit manner, those cognitive products which perception normally provides”
He does use the phrase that I quoted previously “no perceptual processing” in a different paper, but this quote works also.

Am I to conclude from this that you do not, in fact, have a link to such a statement?

I had a link when I quoted it; since then I’ve closed my browser.

He stated both things which to me are substantially equivalent.

Do you really think there is so much difference between the statements that you want me to go searching for the quoted phrase from his other papers?

Look, wolfpup, if you know Pylyshyn’s work and you have a different understanding, then as I said, I am open to being corrected; just lay it out, I’m no expert.

But you seem to be nitpicking and hyper-argumentative about statements that appear equivalent to someone who is not an expert in the matter.

If you think you understand his summary to not be equivalent to the statement “no perceptual processing” then please just explain.

All I’m seeing in the quote you provided from that old paper is the original genesis of the argument on the fallacy of the “picture theory” of mental images that Pylyshyn further developed since then, and presented, for example, 30 years later in the paper that I linked. The argument is that there are fundamental and striking differences in the signature properties of cortical vs. mental images with respect to how they are represented and processed. You’ve been trying to counter this with descriptions of your own subjective perceptions which, to repeat yet again, is a rather pointless exercise.

You claim you can reverse-read a word from a mental image quite easily, and maybe much better than most of us can, but even you acknowledge that it’s significantly harder, so it’s clearly not the same process. (I’m still waiting to hear how you made out with “floccinaucinihilipilification”! :D) You claim that some of the presented mental image tests don’t work for you “unlike what his expectation is”, but the salient point here isn’t anyone’s “expectation” but the accumulated evidence of 30 years of experimental psychology. If you have abilities related to mental image processing much stronger than most, it doesn’t mean the evidence is wrong, it only means that you have abilities that mask these fundamental differences.
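As an aside on the reverse-reading point (an illustration only, not a claim about how the brain stores words): if the word were held as an explicit symbol string, a purely descriptive representation, reversing it would be trivial and no harder for a long word than a short one.

```python
# Trivial illustration: with an explicit symbolic encoding, reading a word
# backwards is effortless regardless of length.
word = "floccinaucinihilipilification"
print(word[::-1])  # prints the reversed spelling in one step
```

The reported difficulty of doing the same thing from a mental image, letter by letter and getting worse with length, is exactly why the comparison is interesting.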

Pylyshyn is the principal advocate for a descriptive, or “propositional”, model of imagery, while on the other side of this ongoing debate Kosslyn has been the principal advocate for a depictive model in which perceptual memories are supposed to be actual spatial representations. The question of which model is correct is profoundly mired in the following conundrum: are mental images the means by which thinking is carried out, or are they the result of that process? This debate has been raging for more than three decades, and, as I said, has not been resolved by 30 years of empirical research. It frustrates me that you think the answer is simple and obvious.

I’ve never thought it was simple or obvious, I explicitly stated I don’t know how it works.

What I disagreed with were the examples that Pylyshyn used to support his position because, for me, they aren’t true. And I will emphasize that point by ignoring the word claim and focusing on the rotated cube: there is no debate about what he thinks we are capable of, and it does not match my own experience, since I was quite easily able to do that task and get the right answer.

So my point was:
His examples don’t prove his point (maybe other examples do but those particular ones don’t).

A much better word example that he mentioned, at least for me, was a 3x3 grid of letters and trying to recall them in any order. But a legitimate question is: maybe the perceptual processing is limited to a small amount of information, or dependent on the input, which is a working storage with severe limits. Does the 3x3 grid task really prove there is no perceptual processing of the mental image, or does it just highlight the limits of our capabilities?
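To make the contrast concrete (purely an illustration, with a made-up grid): if the nine letters were held as an explicit array, every read-out order would be equally cheap, so difficulty with columns or diagonals would have to come from the representation or its working-storage limits, not from missing information.

```python
# Minimal sketch of the 3x3 letter-grid task with an explicit, symbolic
# representation. The letters are invented for illustration.
grid = [
    ["B", "K", "R"],
    ["M", "T", "F"],
    ["D", "P", "L"],
]

# With an explicit array, every read-out order is trivial:
print([grid[r][c] for r in range(3) for c in range(3)])  # row by row
print([grid[r][c] for c in range(3) for r in range(3)])  # column by column
print([grid[i][i] for i in range(3)])                    # main diagonal
```

If people can read an imagined grid fluently by rows but struggle by columns or diagonals, that asymmetry is evidence about the form of the representation, which is the question being debated here.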

Other questions:
1 - Why can’t it be a combination of models in which both propositional and depictive are active simultaneously?

2 - Do you think something like that is impossible?

That’s not the impression one would get from your post #18, which is what started this particular line of discussion. As we’ve been discussing at quite some length now, your reaction to the basic premise that “Accessing information from a mental image is different in many … ways from accessing information from a visual scene” was to say, and I quote, “… completely incorrect assumption … so completely wrong (for me) that I am just stunned someone actually wrote it down …”. Which, as has been pointed out to you by several of us many times now, was a ridiculous misinterpretation of what was being said, and led you to make a continuous series of claims implying (at least, to my reading of what you were saying) that it was obvious that mental imagery uses entirely visual mechanisms, and you knew this for sure because you had thought about it a bit. :rolleyes:

Even if this proved to be the case – for you – with a representative set of experiments under controlled conditions – something that I very much doubt – my answer would be “so what”? There is a plethora of experimental data from many decades of research showing that these things are true, like what happens with ambiguous bistable figures. The duality of these ambiguous figures (like the Necker cube mentioned in the paper; there are many others, and there are rotational examples and other variants) is very easily perceived in a physical image, but it was never perceived in the mental image in experimental studies (e.g., Chambers and Reisberg). Yet it could be perceived again if the subject drew the image from memory. This certainly seems to argue for the propositional model of mental imagery. So does the fact that the functional attributes of Kosslyn’s supposedly depictive model have been demonstrated to exist at a normal level of functionality in people who have been blind from birth.

I have no idea. For one thing, it’s plausible that there may be significant differences in image representations in short-term vs. long-term memory. I don’t claim to be an expert in this area or to know which side of this argument is right. I do, however, think that I more or less understand what the arguments are and why the question is contentious.

I never objected to the basic premise that there are probably many differences; the part of that quote I objected to (clearly) was the word task. And I still do, based on his cube example.

Here is my objection to using the word task as support for his position:
“That is exactly how I would read letters in any order and pretty much the only way I could do it.”

Nothing in that statement says anything about whether mental image processing is identical or completely different or whether there are substantial differences. I’ve never thought they are identical and always assumed significant differences.

Having read several of Pylyshyn’s papers now, I would say the following:
1 - Most of his logic and arguments seem very sound

2 - His word and cube examples are still problematic: he is trying to show that there is no perceptual processing of the mental image, but the tasks he describes (especially the cube task, and the word task for smaller words) are very doable.

This remains a valid objection to using those examples as support for no perceptual processing.

Ok, this is clearly where the problem lies.

Is there a post anywhere in this thread or on this board, by me, that makes anything close to that claim?

This was my early response to you:
“I understand that the “image” is not the same as drawing on a white board and is most likely tied to underlying computation and may just be a representation of what’s going on, but that quote states that my mental image of “probably” doesn’t allow me to randomly order the letters, which is wrong.”

Note this key point:
“may just be a representation of what’s going on”

How can you possibly misinterpret that as “it’s an entirely visual mechanism”?