Naively, I first thought it might be like a digital camera, with the cones and rods in the retina corresponding to the CCDs. Each cone / rod would be a pixel, and the optic nerve would convey a “raster image” of what the eye sees to the brain, for visual processing.
Then I thought, it has to be more analog than that. For one thing, we have stereoscopic vision, and also the image sent to the brain is upside-down (but I believe the brain handles these two items in “post processing”).
This question occurred to me because I was thinking about how the state of the art in artificial sight for the blind, which works by stimulating the optic nerve with electrical impulses, only allows for vague patches of light and dark. This leads me to believe that we don’t really know what the “compression” algorithm is for human vision.
The Wikipedia article only touches on this subject tangentially.
I wanted to address your concept of “data.” We have already started processing visual information even before this information leaves the retina for the optic nerve. In general there is no separating the data from the program, as there is in most computers.
I think if you look at the [Retina](http://en.wikipedia.org/wiki/Retina) article it should at least give you a start in understanding both what happens and how much is known.

My vague recollection of the state of the art of artificial sight is that the major hurdles have more to do with interfacing electrical components with neural components, and with power-related issues.
I guess that depends on how you define “post processing.” As I understand it, it’s not like there’s a specific brain module that says “Oh, and the image I’m getting is upside down, too, so let’s flip that over so we perceive the world right side up.” The brain just thinks of “up” as “the stuff coming from the bottom of the retina” and “down” as “the stuff coming from the top of the retina.”
Part of the problem is, my background is in computers, so I think of everything in digital terms. I know the human vision system is more analog than that; from what I’ve read, it seems that we should consider the optic nerve more an extension of the brain than a “cat 5 ethernet cable” that just carries signals.
Nevertheless, there must be some way that data is encoded to get it from retina -> visual cortex.
Of our perceptual systems, the most is known about the visual system – so much so that a reasonable answer to your question is “yes.” Might I suggest chapter 8 of “Computational Explorations in Cognitive Neuroscience” by O’Reilly and Munakata?
I hesitate to recommend a book I haven’t read but …
Sir Francis Crick spent his last years studying consciousness and he approached it by first trying to understand how vision works. His book The Astonishing Hypothesis apparently has a good section on the neurological aspects of vision research.
A while back I remember a study where test subjects wore glasses that flipped the image they would see. After some time, perhaps a day, perhaps more, they saw things as normal, though they were seeing everything upside down.
There is so much more to vision than just seeing things. It seems to include object recognition and past experience, among other things.
I saw a show that was showcasing some new technology: light receptors for blind people that could be embedded in their eyes (though I think it only had four pixels). So I would say that we do at least know how to replicate the data – it’s just a matter of making the technology small enough.
I took several classes on perception a few years ago. I didn’t really like the texts, so I can’t make any book recommendations, but I remember learning that one of the things that is done at the retina/optic nerve level is conversion of the red/green/blue color signals that the cones receive into three separate sets of opposed signals: red/green, yellow/blue, and black/white. There’s a lot of other processing that’s done at that level as well, but I’d have to go digging into the books to recall the details. The Wikipedia article on the retina goes into some of the other stuff, if you can make it past the onslaught of jargon.
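The opponent-channel conversion described above can be sketched as simple arithmetic on the cone responses. This is a toy model only – the channel definitions follow the standard opponent-process idea (L vs. M, L+M vs. S, overall luminance), but the exact weights here are made up for illustration, not physiological values:

```python
def to_opponent_channels(L, M, S):
    """Toy opponent-process model: convert long/medium/short-wavelength
    cone responses into three opposed channels. The weights are
    illustrative only, not measured physiological values."""
    red_green = L - M               # positive: reddish, negative: greenish
    yellow_blue = (L + M) / 2 - S   # positive: yellowish, negative: bluish
    black_white = (L + M + S) / 3   # overall luminance
    return red_green, yellow_blue, black_white

# A stimulus exciting mostly the L cones reads as "red, somewhat yellow, dim":
rg, yb, bw = to_opponent_channels(1.0, 0.2, 0.1)
```

The point of the opposed encoding is that each channel carries a *difference* rather than three raw intensities, which is one small example of the retina recoding the signal before it ever reaches the optic nerve.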
If I remember right, they didn’t actually see things as completely normal – they were able to function for the most part, but they never learned to read or write, for instance. And they reported that the whole time, they had the feeling that things looked bizarre or surreal.
Also, when they took the goggles off, it took them almost as long to adjust back to the regular way of seeing things as it had taken to adjust to the goggles being on.
I found a quick reference for this: my copy of Joel Achenbach’s excellent book Why Things Are, Volume II: The Big Picture. Achenbach, sort of the love child of Cecil Adams and Dave Barry, has a Straight Dope-esque column in the Washington Post (or he did … I don’t know if he’s still doing that these days).
Anyway, on page 116 of this book, paragraph 5 of his answer to the question “Why don’t we see the world upside down, since the image is flipped upside down by the lenses of our eyeballs?”, we find the following, which I’ll quote using an unconventional method of quoting in order to preserve italics, which, as we all know, are extremely vital for the preservation of life:
QUOTE
Now, let’s go to the scientific literature. There have been several experiments in which people wore funny goggles equipped with inverting prisms, causing the ground to hit the lower part of the retina and the sky to hit the upper part. The goggle wearers were initially discombobulated. They didn’t feel upside down, exactly, because all their visual cues were still lined up correctly – and gravity still pulled them toward the ground at their feet. But they still had a sense of unreality. A Japanese study in 1980, published in Kyushu Neuro-psychiatry, stated that over the course of a week, two goggle wearers gradually adjusted to the inverted world but never became completely comfortable. They never managed to read and write, and they lacked dexterity. A similar study by the Soviets, published in the September 1974 issue of the journal Voprosy Psikhologii, said it took eight days for the subjects to become accustomed to the inverted world. When the prisms were removed, it took another couple of days to set things straight again. UNQUOTE
I don’t have access to my texts today, but there are a couple of points that can be made right off the bat. First I want to reiterate that the optic nerve is not some sort of cable which connects a sensory region to a processing center. It’s more accurate and helpful to view your eyes as highly specialized protrusions from your brain – indeed one of the oldest portions – the basics having been laid down some 450 million years ago.

The back of your retina contains a handful of different types of cells. Rods and cones detect photons and activate ganglion cells via other cell types (horizontal cells, bipolar cells, amacrine cells). These ganglion cells largely detect gradients and abrupt changes in light patterns over a small area of the retina. This is the majority of the information that is sent deeper into the brain through the optic nerve: mainly information about borders between shapes and colors. It is deeper in the brain that information about the color and movement of objects is reinterpreted from this data.
In short, no, it’s nothing like your digital camera. The information content traversing the optic nerve would be more similar to a paint-by-numbers than a photograph.
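The “borders, not pixels” idea can be illustrated with a toy center-surround filter, the kind of computation retinal ganglion cells are often modeled with. This is a sketch under simplified assumptions (one dimension, two neighbors as the “surround”), not a model of any real cell: each output unit responds to the difference between its center input and the average of its neighbors, so uniform regions produce near-zero output while edges stand out:

```python
def center_surround(signal):
    """Toy 1-D center-surround response: each unit fires in proportion
    to how much its 'center' input differs from the average of its two
    'surround' neighbors. Uniform illumination gives near-zero output."""
    out = []
    for i in range(1, len(signal) - 1):
        surround = (signal[i - 1] + signal[i + 1]) / 2
        out.append(signal[i] - surround)
    return out

# A step edge between a dark patch and a bright patch:
scene = [0, 0, 0, 1, 1, 1]
responses = center_surround(scene)
# Only the units straddling the edge respond; the flat regions stay silent.
```

Notice how little of the raw “image” survives: the flat interiors of both patches encode to zero, and only the boundary carries signal – the paint-by-numbers outline rather than the photograph.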
Optobionics is currently going through their second clinical trial for a working, crude artificial retina, so the technology is definitely feasible. I think they’re up to about 10K pixels, B&W atm, which is certainly not horrible.