Binocular vision

Why is it that things don’t look half as bright when seen through only one eye? Aside from depth perception, is binocular vision a redundancy?

Binocular vision offers wider peripheral vision.

Quite so. It gives a wider field of view. Your right eye can see things considerably further to your right than your left eye can, and vice versa.

Indeed, most two-eyed animals (vertebrates, anyway) have their eyes on either side of their head, with fields of view that overlap little, if at all. This provides a much greater total field of view, but no binocular depth perception. Most animals with true binocular vision - i.e., a forward-facing pair of eyes - are either predators or arboreal. (Humans are, of course, predators who are fairly recently descended from arboreal primates.) Presumably this is because accurate depth perception is more important to them than being able to spot predators approaching from the side. Predators need to know exactly how far to pounce, and monkeys need to know exactly how far to jump or swing in order to catch the next branch. Other animals can get by with less accurate or less reliable distance cues.

Another advantage of having two eyes, especially two with largely overlapping fields of view, is that if you lose one, you can still see reasonably well.

In terms of biological redundancy, it’s not so much a case of having a “spare” organ as of one eye being able to function normally if the other is injured or infected. That’s where the evolutionary advantage lies.

Tackling just the first question since everybody else jumped on the second …

Brightness is a perception of how thoroughly the ambient light is saturating the eye’s sensors. Why would you expect the area seen only by the right eye to appear dimmer when the left eye is closed? The left wasn’t contributing anything to that part of the scene when open.

What you will see if you close the left eye is that the part of the scene only the left eye *was* providing now looks black. That’s where you *are* losing brightness. Again, no surprise there.

So what we’re left to consider is the 50%-ish of the total visual field where both eyes provide information about the scene when they’re open and functioning.

Vision is mostly a brain/mind phenomenon, not just a simple light sensor & storage system like a camera. For your vision to be perceived as dimmer in the overlapping area when you closed one eye, it would also have to be perceived as brighter in the shared center than it is at the individual peripheries when both eyes are open.

And that just isn’t how it works. In principle it *could* work that way, but it doesn’t. Two eyes each reporting “My average scene brightness is 75% of max” don’t add up to one scene in the brain rendered at 150% of max brightness.
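To make that arithmetic concrete, here’s a toy sketch in Python. The averaging rule and the function name are illustrative assumptions for this thread, not a model of real neural processing; the point is only that the brain’s combination behaves like an average, which can never exceed 100%, rather than a sum.

```python
def combine_binocular(left, right):
    """Toy combination rule (an assumption for illustration, not real
    neurophysiology): perceived brightness in the overlapping field
    behaves roughly like an average of the two eyes' reports (each on
    a 0.0-1.0 scale), so it can never exceed 1.0 (max brightness)."""
    return (left + right) / 2

# Both eyes open, each reporting 75% of max brightness:
print(combine_binocular(0.75, 0.75))  # 0.75 -- not 1.5
```

So closing one eye removes an input, but under a rule like this the combined percept in the shared field was never a sum to begin with, which is why nothing looks half as bright afterwards.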

Also bear in mind that what we “see” is heavily image-processed by the brain. The best example I’ve seen of this was a programme where a camera was taken into a room whose walls were covered with copies of Andy Warhol’s portrait of Marilyn Monroe. Your brain very quickly tells you the portraits are all the same. Then they panned the camera around: several were actually done in different colours, and a few showed completely different things in the same style.

Here’s an experiment to get at the core of the OP’s first question: Take a book into a light-sealed room, where you can very precisely control the illumination. Dim the lights to the very minimum, until you can only just barely read the book, and record what level of illumination this is (using a photometer or some other objective measurement, of course). Then repeat the experiment with an eyepatch over one eye, and record the level of illumination there. Would the second scenario require twice the brightness of the first?
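The bookkeeping for that experiment is just a ratio of the two measured thresholds. A minimal sketch (the function name and the lux figures below are placeholders, not real measurements - the actual numbers would come from your photometer):

```python
def threshold_ratio(monocular_lux, binocular_lux):
    """Ratio of the minimum illumination needed to read with one eye
    to the minimum needed with both. A ratio of 2.0 would mean
    one-eyed reading requires twice the light."""
    return monocular_lux / binocular_lux

# Placeholder readings only -- substitute your photometer's values:
print(threshold_ratio(monocular_lux=1.4, binocular_lux=1.0))  # 1.4
```

If the “two eyes = double brightness” intuition were right, you’d expect this ratio to come out near 2.0; the question is whether it actually does.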

Interesting note: We do get some depth perception from a single eye; here’s an article:

http://machineslikeus.com/news/scientists-uncover-second-depth-perception-method-brain

And if human vision is similar to that of the animals that have been studied, then it is also heavily processed by the eyes themselves before reaching the brain. Rabbits transmit something like 12 different images to the brain, each focused on identifying different types of attributes in the scene.