I was thinking about my stereoscopic vision today and wondered how it came to be. I don’t mean this as a ‘What good is half a wing?’ type question, but it does strike me as an unusually elegant deviation from our non-stereoscopic ancestors. I would think that a lack of focus would be weeded out.
A search didn’t turn up anything per se (given my searching skills), and Wiki doesn’t say anything about the evolution of it, only what it is and why it’s a good thing:
“Tree apes swing from branch to branch and failure to instantly judge the world in 3D could lead to a bad fall.”
…but in animals that have eyes arranged closely enough together to allow it, the brain is eventually going to work out the parallax thing.
Do we think that stereo vision was “selected for” (hard to avoid the pathetic fallacy here) or is it just a byproduct of a differently-shaped head?
We could speculate that animals with fore-facing sensory gear are in a better position to make an offensive move against an animal as soon as they see it, I guess. Having your peripheral vision directly in front of you might present some problems, if you’re a predator or an animal that defends from the front.
A rabbit or a horse avoids predation by getting the hell away as quickly as possible, so there’s no advantage in focusing on what’s directly in front of them.
A wolf’s eyes are more conveniently arranged to the fore.
Maybe depth perception is just a perk? (Traded-off with a smaller field of view, of course.)
I agree that the forward-facing predator-eye speculation is plausible, but I wouldn’t assume that forward-facing means stereoscopic. I would guess that, like all traits, stereo vision was selected for, but I’m wondering how, since non-focus (in the intermediate mutations) should be selected against.
If the eyes are oriented so that they’re close enough together, you have enough information there to work out depth.
I don’t think that there’s a specific anatomical “3D vision system” that evolved. We’ve got the data because of where our eyes are. We’ve got brains that process the data supplied. I think if we had a mutation that moved the eyes 180 degrees from each other, the developing brain would use that information to form a solid image with a greater field of view, but rely entirely on perspective for depth. You do the best with what you have.
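The “you have enough information there to work out depth” point can be made concrete with a toy calculation. This is a minimal sketch using a simplified pinhole stereo model; the baseline, focal length, and disparity values are illustrative, not measured human figures.

```python
def depth_from_disparity(baseline_m, focal_px, disparity_px):
    """Depth of a point from its horizontal disparity in a pinhole stereo model.

    baseline_m   -- separation between the two "eyes" (metres)
    focal_px     -- focal length expressed in pixels
    disparity_px -- horizontal shift of the point between the two images

    Depth = baseline * focal_length / disparity: the nearer the object,
    the larger the shift between the two views.
    """
    return baseline_m * focal_px / disparity_px

# Illustrative numbers: ~6.5 cm interocular separation, arbitrary focal length.
print(depth_from_disparity(0.065, 1000, 65))   # ~1 m away
print(depth_from_disparity(0.065, 1000, 6.5))  # ~10 m away: disparity shrinks with distance
```

Nothing in the geometry requires dedicated hardware; any system with two offset views and the inclination to compare them can recover this.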
As long as both our eyes work, I think we learn to work out parallax depth from infancy. The visual cortex has a certain amount of self-organization. From the time that it starts receiving information, it works to form it into a coherent image of our environment.
So much of what we see is interpretive, and based on what we’ve learned from previous stimuli. For example, our experience tells us that things on the horizon are very far away. Our optical gear gives us a perspective view, which our interpretation modifies based on experience, so things on the horizon register as much bigger than our eyes are reporting. Most of the time, this gives us a “truer” picture of the world – but a byproduct is that, when the moon is close to the horizon, we automatically correct for distance and receive an impression of a much larger satellite.
I think part of our vision is learned – I don’t think an infant opens its eyes for the first time and has any concept of “near” or “far.” But if the visual cortex is consistently informed that, just before they touch your nose, objects appear much further left through the right eye than they do in the left – and this effect is graduated from the foreground to the extreme horizon – it doesn’t take long before that’s just a rule that’s incorporated into all visual processing.
What I meant was that the wiring that allows us to process that information had to evolve. Rather than the brain simply learning how to do it, I would suggest a complex brain function would have to evolve to allow us to process the incoming data stereoscopically. And a mutation headed in that direction wouldn’t seem particularly useful on its own.
This is a point of some controversy. Brains are self-organizing, and stimulus modifies how neural pathways develop.
Our understanding of exactly how depth perception develops is incomplete. We know that certain groups of neurons in the visual cortex are stimulated when presented with a binocularly-disparate image and others are suppressed, while there are other groups of neurons for which the reverse is true. We know that in very early development, the “monoscopic” neurons are much more active than the “stereoscopic” neurons. We also know that the pathways for processing stereo information develop very rapidly. We don’t know for sure whether there’s a genetic mechanism for the ability to process stereo data, or just a more open-ended mechanism that’s very good at building up a model of the world from whatever coherent information gets chucked at it.
My intuition is that this is one of those areas where the brain’s remarkable adaptability is the main thing. We know that the visual cortex rapidly adapts to different stimuli even into adulthood. It’s not exactly the same, but the correction of retinal inversion is similar. (People fitted with optics that invert their view have their visual systems “re-wired” after a few days of disorientation, so that things no longer appear “upside-down” to them – until they take the glasses off, and have to spend the same amount of time re-orienting themselves.)
It would be interesting to see if an animal without stereoscopically-seeing forebears would develop depth perception if presented with front-facing stereo images from birth. I’m picturing a compact, shielded VR-like “halter” with cameras on the front relayed to screens over the foal’s eyes. I’m fairly sure it would be able to orient itself well from early on – but we might be able to determine whether or not it could “decode” stereo information by spoofing its input. (e.g., present it with a phantom object that, according to the stereo disparity, was rapidly approaching, without the apparent increase in size that perspective tells us accompanies an object’s approach, and see if it attempts to evade it.)
My gut tells me that a horse’s brain is “stereo-ready,” and that it basically comes down to the general adaptability and versatility of self-organizing neural networks.
Of course, it’s possible that there’s something in our genetic heritage (apart from the general arrangement of our eyes) that’s needed to work that trick, and we just don’t know about it yet.
This would suggest that the ability to coordinate hand and eye movements in order to make and manipulate tools is probably a stronger influence on the evolution of stereoscopic vision than the ability to judge the distance of prey. Relative size and other monoscopic techniques are much more dominant for depth perception at greater distances.
Definitely not. You can look down the street through alternating eyes and consciously note differences at fifty to seventy-five times that distance, at least. Your “automatic” depth perception works over a greater range than that.
This is a mistaken conclusion. Close tool work is what we spend most of our time using depth perception on because that’s our thing. None of the other animals who benefit from stereo vision use it toward that end to a significant degree. Predatory birds have very keen depth perception, and they make good use of it for their swoopin’ and snatchin’. Birds that have been observed using tools (Woodpecker finch, and crows) lack stereo sight.
Speaking as a one-eyed person, I agree with the quote in your post. The subtended angle is too small at anything other than close distances. There are sufficient visual cues to judge distances at greater range that binocular vision is not required, although it sucks the fun out of most IMAX films.
On reflection, this probably refers to the secondary depth perception of “convergence.” This isn’t even really a visual cue, having nothing to do with the light that’s falling on your retinas – it’s a specialized form of proprioception. (Positional feedback – the same sense that allows you to know what position your limbs are in without looking at them.)
When an object is within a metre or two of your face, your eyes have to “cross” more to focus on it the closer it gets. Your orbital muscles report their position to your brain, which does a sort of triangulation-on-the-fly from which you can infer the position of the object you’re focusing on. The greater the convergence of the eyes, the closer the object is. This sense is of no use except for close work, because convergence is negligible at distances over a couple of metres.
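The triangulation described above is simple enough to sketch. This treats the two eyes and the fixated object as an isosceles triangle; the ~6.5 cm baseline is a typical adult figure, and the angles chosen are just illustrative.

```python
import math

def distance_from_vergence(interocular_m, vergence_deg):
    """Distance to a fixated point, inferred from the vergence angle.

    Isosceles-triangle geometry: half the interocular baseline divided by
    the tangent of half the vergence angle.
    """
    half_angle = math.radians(vergence_deg) / 2
    return (interocular_m / 2) / math.tan(half_angle)

# With a ~6.5 cm baseline, vergence varies sharply close in
# but flattens out beyond a couple of metres:
for deg in (20, 4, 2, 1):
    print(f"{deg:>3} deg -> {distance_from_vergence(0.065, deg):.2f} m")
```

Running it, 20° of vergence corresponds to roughly 0.18 m, 4° to about 0.93 m, and by 2° you’re already out near 1.9 m – so beyond arm’s reach, huge changes in distance produce only tiny changes in muscle position, which is why the cue fades so quickly.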
Our foveal vision is acute enough that that couple of inches of distance between the eyes provides plenty of parallax disparity at much greater distances, though.
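To put a rough number on how far parallax disparity stays usable: the small-angle sketch below computes the binocular parallax of a point at various distances. The ~6.5 cm baseline is a typical figure and the ~10 arcsecond stereoacuity threshold is an assumed round number for comparison, not a sourced measurement.

```python
import math

def disparity_arcsec(interocular_m, distance_m):
    """Binocular parallax (arcseconds) of a point at a given distance.

    Small-angle approximation: angle ~ baseline / distance (radians),
    converted to degrees and then to arcseconds.
    """
    return math.degrees(interocular_m / distance_m) * 3600

# Illustrative: ~6.5 cm baseline; a fine stereo system can resolve
# disparities on the order of 10 arcsec.
for d in (2, 10, 100, 500):
    print(f"{d:>4} m -> {disparity_arcsec(0.065, d):.0f} arcsec")
```

Even at 500 m the parallax is still a few tens of arcseconds – above the assumed threshold – which is consistent with the claim upthread that “automatic” depth perception works well beyond close-work distances.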
I just happened to hear a report on the CBC morning show today that dogs have stereo smelling. And I know that the forked tongue on snakes gives them stereo smelling as well. And we, along with owls and presumably many, many other animals, have stereo hearing. What I conclude from this is that stereo processing in the brain is an old, old process that adapts to whatever sense is involved. Land animals’ primary sense is smell, while airborne (whether flying or arboreal) animals use vision as their main sense. Dogs apparently see as well as we smell, and vice versa. We, of course, are descended from a long line of arboreal animals. Isn’t it amazing how much explanatory power inheres in evolution?