# Binocular vision / 3D question

We are taught that we see in 3D because we have binocular vision: we get two images slightly apart, which our brain can then turn into a 3D image. OK, that makes sense, but is this all we use for depth perception?

Two other things come to mind:
1 - We focus our eyes to a given distance; it seems we can use this as a way to tell distance.
2 - Image recognition: we know how big an object is from past experience with it, so when we see one of those objects at a distance, we come up with an estimated distance.
Both of these would happen automatically in the brain; we would just perceive the result.

I can give you an authoritative answer.

When only one eye works, one can judge distance by refocusing.

I only found out that I had 2D vision when I went to see a film called ‘The Bubble’. It was full of special effects, but I found them boring while others were ducking and yelling.

Certainly we use other cues to gauge distance (besides the ones you mention, I often see it cited that we also judge by relative haze and atmospheric scattering, which increase with distance).

But gauging distance from such cues is very different from stereo vision. When I close one eye, the field seems to “flatten out”, even though I still know just as much as I did before I closed that eye, based on visual cues. When I look through a View-Master or other stereo viewer, I perceive definite distances, even when there are no other cues to tell me how far away something is (such as with cartoon characters). Very clearly, stereo depth perception is a sensory method that works independently of other methods of gauging distance.
In fact, it will sometimes work despite visual cues that contradict it. If you reverse the images in a stereo viewer (or cross your eyes when looking at one of those “Magic Eye” pictures), you will place the left image in front of the right eye, and vice versa. This gives you an “inside out” image, with what should be nearest the eyes appearing farther away, and vice versa. Yet you see that “inside out” image, even if it seems unreal, and even when visual cues (such as one object clearly blocking part of another object, which therefore must be closer to you) tell you that what you’re seeing is wrong.

Yes, we use multiple methods. The refocusing of the eyes isn’t very useful unless, for some reason, your binocular vision is out (one eye injured or closed). Eye focus works much the same way as two-eye parallax does, but it has a baseline of only the diameter of the pupil, compared to the distance between the eyes, so it’s good only to a much shorter range than binocular vision; and even at those ranges, binocular vision will tell you distances more clearly.

On the other hand, any parallax-based method has a maximum effective range. For binocular vision, given typical human visual acuity, that limit is somewhere in the vicinity of 30 feet. You can get proportionately longer range using a larger baseline, for instance by moving your head from side to side or walking around, but this isn’t always practical. So beyond the maximum parallax distance, you mostly use the apparent size of objects (assuming you’re familiar with how big an object actually is), supplemented by cues like which objects are in front of others. This works out to essentially any distance, but it’s easy to fool if you don’t know how big something really is (or worse, if you do know, but the thing you’re looking at isn’t the usual size).
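The “larger baseline buys proportionately more range” point can be put on a back-of-the-envelope footing. This is a rough geometric sketch, not from the thread: the baseline and threshold values are illustrative assumptions, and the answer you get depends entirely on the angular threshold you assume (which is why quoted limits for stereo vision range anywhere from ~10 metres to hundreds of metres).

```python
import math

def parallax_angle(baseline_m, distance_m):
    """Angle (radians) subtended at the target by the two viewpoints."""
    return 2 * math.atan(baseline_m / (2 * distance_m))

def max_parallax_range(baseline_m, threshold_rad):
    """Distance at which the parallax angle shrinks to the threshold
    (small-angle approximation: angle ~ baseline / distance)."""
    return baseline_m / threshold_rad

# Assumed numbers, purely illustrative:
EYE_BASELINE = 0.065              # ~65 mm between the eyes
HEAD_SWEEP = 0.30                 # moving your head ~30 cm side to side
THRESHOLD = math.radians(1 / 60)  # a hypothetical 1-arcminute threshold

print(max_parallax_range(EYE_BASELINE, THRESHOLD))  # ~223 m at this threshold
print(max_parallax_range(HEAD_SWEEP, THRESHOLD))    # ~4.6x farther: range scales with baseline
```

Whatever threshold you pick, doubling the baseline doubles the range, which is the reason a side-to-side head sweep (or a stereo camera rig with widely separated lenses) extends depth perception well past what the eyes manage alone.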

There’s a whole bunch of cues that people can use to judge distance. The following are the key ones.

<digs out old textbook to make sure I get everything>

Accommodation is refocusing of the eye.

Convergence is the angle of your eyes toward or away from your nose.

Size is regarded as being judgmental rather than perceptual. (You have to know how big the target is in order for this to work.)

The ground plane affects perspective and texture gradients, something you’re familiar with. Parallel lines converge in the distance; patterns get finer farther away. (Note that this is independent of absolute size judgment.)

Binocular disparity is your eyes getting two different pictures since they are separated in space.

Occlusion: Whether something is in front of another.

Height in visual field: Things that are farther away tend to be higher in the visual field.

Aerial perspective: Stuff gets blurry when it’s far away due to moisture and pollutants in the air.

Motion perspective: Move your head back and forth. Near objects seem to move a lot, distant objects not so much.

Different cues contribute to greater or lesser degrees at different distances. For example, convergence and accommodation are good out to about arm’s length; binocular disparity is good out to about 11 meters; but occlusion works at any distance.
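That “different cues at different distances” summary can be caricatured in a few lines. The ranges below are the rough figures from the post (arm’s length, ~11 m); the motion-parallax figure is my own assumed placeholder, and this is a toy lookup, not a perceptual model:

```python
# Approximate effective ranges in metres; None means the cue
# works at essentially any distance. Values are rough.
CUE_RANGES = {
    "accommodation": 0.75,        # roughly arm's length
    "convergence": 0.75,          # roughly arm's length
    "binocular disparity": 11.0,  # ~11 m per the post
    "motion parallax": 30.0,      # assumed figure, illustration only
    "occlusion": None,
    "relative size": None,
    "aerial perspective": None,
}

def cues_available(distance_m):
    """Cues still usable at a given viewing distance."""
    return [cue for cue, limit in CUE_RANGES.items()
            if limit is None or distance_m <= limit]

print(cues_available(0.5))  # up close, everything contributes
print(cues_available(100))  # far away, only the unlimited cues remain
```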
Sources:

Gillam, B. (1995). The Perception of Spatial Layout from Static Optical Information. In Epstein, W. and Rogers, S. (eds.), Perception of Space and Motion. San Diego: Academic Press.

Cutting, J. and Vishton, P. (1995). Perceiving Layout and Knowing Distances: The Integration, Relative Potency, and Contextual Use of Different Information about Depth. In Epstein, W. and Rogers, S. (eds.), Perception of Space and Motion. San Diego: Academic Press.

Here’s a fun little exercise. Get a friend and a football and play catch. Easy stuff. Then have your buddy pause for a second. Look at him with both eyes open, then close one eye. Everything still looks the same. With your eye still closed, have your buddy throw you the ball. If you’re not careful, it’ll smack you in the face before you realize it. Without the 3D advantage, your brain takes longer to process the velocity using only one reference point. But if we couldn’t gauge depth at all without binocular vision, we’d always get smacked in the face.