Why are peripheral vision tests so low contrast?

I don’t know if my optometrist’s machine is the exception or the norm, but it’s basically a large white bowl, turned on edge, with pinhole-sized amber lights spaced at fairly regular intervals inside it. The lights go on and off one at a time, and I’m supposed to press a button every time I see one come on.

My question is, why amber on white? Shouldn’t there be greater contrast between the light and the background, so I’d have a better shot at actually telling when a light comes on? Why not a black bowl with amber lights? Wouldn’t that make more sense?

My WAG: with a light on a black background, you’d be able to tell the light was on just from scattered or reflected light, even if you couldn’t actually see the light itself.

IIRC, peripheral vision is more sensitive to variations in light intensity. When you’re out looking at stars at night, there are some you can perceive peripherally, but looking at them directly does not reveal them.

I think Ringo is talking about averted vision, which is not really peripheral vision. The center of the retina has a high density of cones and not many rods, so low-light sensitivity is not great. The surrounding area has more rods and fewer cones. So by averting your eyes slightly (say 10 degrees) you can make out darker features. It’s a standard technique for amateur astronomy.

p.s. When you stretch out your arm, your fist is about 10 degrees wide in angular size. The diameter of the Moon is about half a degree. Those are useful for estimating the angular separation or size of stuff.
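
If you want to sanity-check those figures, here’s a quick back-of-the-envelope sketch (the fist width and arm length are rough guesses on my part; the Moon numbers are the standard ones):

```python
import math

def angular_size_deg(width, distance):
    """Angular size, in degrees, of an object of a given width
    seen from a given distance (same units for both)."""
    return math.degrees(2 * math.atan(width / (2 * distance)))

# A fist (~10 cm wide) held at arm's length (~60 cm) -- rough guesses
print(angular_size_deg(10, 60))        # ~9.5 degrees
# The Moon: ~3474 km across at ~384,400 km away
print(angular_size_deg(3474, 384400))  # ~0.52 degrees
```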

Check out the Zeiss Humphrey Systems webpage. They manufacture visual field test equipment. There’s an “ask the expert” page on the site.

My WAG for the OP is that visual field testing is trying to measure the sensitivity of your peripheral vision compared to the norm. The rationale for this screening test is that it can pick up early retinal changes due to glaucoma. The greater the contrast between the stimulus and background, the more easily you’ll perceive the stimulus. At some level of contrast difference, even individuals with clinically significant peripheral field loss will still see the stimulus. Therefore, the contrast difference employed is operationally defined: it is set at a level that people with normal vision will perceive in the peripheral fields while individuals with early glaucoma will not. In other words, the contrast is set at a level that makes the test a clinically useful screening tool.
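
To make that “operationally defined” idea concrete, here’s a toy sketch (all numbers invented, nothing clinical) of how you might pick the single stimulus contrast that best separates the two groups, scoring each candidate level with Youden’s J statistic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical minimum contrast each person needs in order to see the
# stimulus (arbitrary units); glaucoma shifts the distribution upward.
normal = rng.normal(loc=1.0, scale=0.2, size=1000)
glaucoma = rng.normal(loc=1.6, scale=0.3, size=1000)

best_contrast, best_j = None, -1.0
for c in np.linspace(0.5, 2.5, 201):
    sensitivity = np.mean(glaucoma > c)  # glaucoma patients who miss the stimulus (flagged)
    specificity = np.mean(normal <= c)   # normals who still see it (pass)
    j = sensitivity + specificity - 1    # Youden's J: 1 = perfect separation
    if j > best_j:
        best_contrast, best_j = c, j

print(f"best test contrast ~ {best_contrast:.2f} (Youden's J = {best_j:.2f})")
```

The point is the trade-off: crank the contrast way up and specificity goes to 1 but sensitivity collapses, because everyone sees the stimulus, including people with early glaucoma; crank it down and the reverse happens.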

Looking around the Zeiss site, I found that the latest innovation in visual field testing is called blue-yellow perimetry. The stimulus is blue and is displayed against a yellow background. They claim that this color combination has a higher sensitivity in identifying glaucomatous visual field loss than the standard white-on-white test.

Obviously, they want the statistics they get from the tests to be meaningful, so they can be used to help diagnose vision problems. If they made the test super high contrast, it would be easier, maybe so easy that lots of people would get perfect scores. If they made it lower contrast, it would be harder. Presumably, they’ve come up with a test that gives a nice range of scores for most people, so scores can be compared.

In a test like that, you want to be assured that the people with the highest scores got them because that’s what their abilities dictate, not just because they maxed out the test. The same goes for the low end. So you have to find a balance. How you do on the test isn’t really important; it’s how you do relative to people with normal vision.
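
Here’s a toy simulation of that ceiling effect (entirely made-up numbers, with a logistic detection model assumed purely for illustration): when the stimuli are too easy, lots of people max out and the scores lose the spread you’d need to compare people.

```python
import numpy as np

rng = np.random.default_rng(1)
ability = rng.normal(0.0, 1.0, 500)  # hypothetical true sensitivity per person

def scores(difficulty, n_trials=50):
    # Chance of seeing each stimulus rises with (ability - difficulty)
    p = 1 / (1 + np.exp(-(ability - difficulty)))
    return rng.binomial(n_trials, p)

for difficulty, label in [(-5.0, "too easy"), (0.0, "well matched")]:
    s = scores(difficulty)
    print(f"{label:>12}: perfect scores = {np.mean(s == 50):.0%}, "
          f"spread (std) = {s.std():.1f}")
```

With the well-matched difficulty, almost nobody hits a perfect score and the standard deviation is several times larger, which is exactly the usable range of scores described above.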

Of course, if you make the test too hard, you add another influence: people get flustered by the perception that they’re doing badly, then do even worse.