The Biological Basis of 3 Spacial Dimensions

Oops! That should have read “They are very different concepts” or “These are very different concepts”. Doh!

      • The example wasn’t supposed to create the illusion that jellyfish have eyes that can see the way ours do, or that they have enough marbles to chat with you about it. The point was that we are stuck with much of our perception because of the vision/brain setup that we have. We have two eyes placed side by side, and so much of our perception is based on dividing everything into “two sides”. Certain jellyfish (man o’ wars are one, I think) are round with eyespots all around; therefore they have no use for the terms “left” and “right”.
      • The reason their method of description is more precise is this: take, for example, any object whose location is known to you. Now imagine that you stood someone else next to you and tried to give them directions on how to find it without using the terms “left” or “right”, or even “up” or “down” (two terms based on gravitational influence). You would have to orient them entirely with other objects along the entire distance; hence, you would end up giving them a more precise set of instructions than if you could simply say “turn left at this point” or “turn right at that point”.
      • He also noted that a “better” method of spacial orienting would be four different directions radiating away from each other at equal angles (from the center of a tetrahedron toward its four vertices), because our triple-axis method is incorrect: it requires negative measures of spacial distance that can’t exist in reality. The tetrahedral method always has at least one distance equal to zero but never requires negative numbers to describe where any point lies. The problem is that it isn’t instinctive, a shortcoming he chalked up to our built-in perceptive methods. - MC.
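
I can’t vouch for this scheme beyond MC’s description, but here’s a rough sketch of the idea in Python. The four direction vectors, the scaling, and the function names are my own illustrative choices, not anything MC specified; the point is just that any location comes out as four non-negative numbers, at least one of which is zero.

```python
# A sketch of describing a point with four "tetrahedral" directions instead of
# three axes. The rays point from the center of a regular tetrahedron to its
# corners; these particular vectors are an arbitrary illustrative choice.

RAYS = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]  # the four rays sum to zero

def to_tetrahedral(p):
    """Convert a Cartesian point (x, y, z) into four non-negative coordinates,
    at least one of which is zero."""
    # Because the sum over i of RAYS[i] * RAYS[i]^T equals 4 * identity, the
    # values q_i = (p . ray_i) / 4 reconstruct p exactly as sum_i q_i * ray_i.
    q = [sum(pc * rc for pc, rc in zip(p, ray)) / 4.0 for ray in RAYS]
    # Adding the same constant to all four coordinates changes nothing (the rays
    # cancel out), so shift until the smallest coordinate is exactly zero.
    shift = min(q)
    return [qi - shift for qi in q]

def from_tetrahedral(q):
    """Recover the Cartesian point from its four tetrahedral coordinates."""
    return [sum(qi * ray[axis] for qi, ray in zip(q, RAYS)) for axis in range(3)]

# A point "behind, above, and to the left" needs negative numbers in XYZ,
# but its tetrahedral description is all non-negative with one zero.
point = (-2.0, 1.0, -0.5)
coords = to_tetrahedral(point)
print(coords)                    # [0.25, 0.0, 1.5, 0.75]
print(from_tetrahedral(coords))  # [-2.0, 1.0, -0.5]
```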

OK, after extensive searching (minutes) I’ve turned up nearly zilch. Not to be daunted… I’ll try to make my point clear through a few developments and examples.

First, a philosophical test. Not a proof, per se, but food for thought. Let’s say that you do have a kind of vision that allows you to see around objects and potentially see them simultaneously from the front and the back… what would you call this kind of vision? Certainly not 4 dimensional vision…

By the way, computer imaging jargon for the interpretation of 3D information from 2D images is 2.5D. The image modeling uses a technique known as shape-from-x, where x is one or more of the visual cues such as shading, texture, contour, focus, stereo, motion, and other photometric and geometric inverse approaches…

OK, let’s talk about what it means to see in three dimensions. I’ll start by establishing a framework for my discussion. Let’s say that the first dimension is X and this is the left-to-right dimension. The second dimension is Y and is the top-to-bottom dimension. Finally, the third dimension is Z and is the depth dimension (i.e. near or far).

Now, when I look at a scene with one eye, the image is captured in two dimensions (X and Y). This image itself has no Z component, though through some of the visual cues mentioned above I can infer some 3D characteristics. If I open the second eye, I get a second 2D image. From this I can gather a few more visual cues, but I still have no actual Z component in my vision.
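
To make that “inferred, not captured” distinction concrete, here’s a rough sketch of the classic stereo cue. The focal length, baseline, and pixel positions below are made-up illustration numbers, not measurements of any real eye or camera; the point is that Z never appears in either 2D image - it has to be computed from the two X positions.

```python
# A sketch of depth-from-stereo for a rectified pinhole-camera pair.
# Z = f * B / d, where d is the disparity: how far the same feature shifts
# horizontally between the left and right images.

def depth_from_disparity(x_left, x_right, focal_px=800.0, baseline_m=0.065):
    """Estimate the depth (in meters) of a feature seen at column x_left in
    the left image and column x_right in the right image."""
    d = x_left - x_right                   # disparity in pixels
    if d <= 0:
        raise ValueError("feature must shift between the two views")
    return focal_px * baseline_m / d

# The same feature lands at column 412 in one "eye" and 380 in the other;
# the 32-pixel shift implies a depth of roughly 1.6 meters.
print(depth_from_disparity(412, 380))
```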

So what does it mean to add a Z component to our vision? I can visualize this in at least two ways… One I call the three degrees of freedom method and one I call the Z slice method… Hey, I’m making this up on the fly. As I said, I couldn’t find anything about this kind of stuff anywhere.

The three degrees of freedom method is a pixelated approach. If I break my 2D image down to its smallest X and Y components resolvable by the vision system, then in a 3D image, I have to account for the Z position of any pixel in the 3D image. Imagine that each pixel is resolved by three line segments: one that traverses the X dimension, one that traverses the Y dimension, and one that traverses the Z dimension. The intersection of these three line segments defines a 3D pixel. Just as a 2D image is the composite assemblage of pixels in 2 dimensions, a 3D image is the assemblage of pixels in 3 dimensions. Since pixels on the back sides of objects are within the legitimate 3-space, they would necessarily be visible. Another way to think of this is that the complexity of a 2D image is a square function (X * Y) and the complexity of a 3D image is a cube function (X * Y * Z).
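
Here’s a rough sketch of that bookkeeping. The resolutions and the index function are arbitrary illustrative choices of mine; the point is the square-versus-cube growth, and that a 3D pixel on the far side of an object is just as addressable as one facing the viewer.

```python
# A sketch of a 3D image as a grid of 3D pixels (voxels), each addressed by
# an (x, y, z) triple. The resolutions below are arbitrary illustration values.

W, H, D = 640, 480, 300          # X, Y, and Z resolution of the vision system

pixels_2d = W * H                # 2D image: X * Y samples
voxels_3d = W * H * D            # 3D image: X * Y * Z samples
print(pixels_2d, voxels_3d)      # 307200 vs 92160000: square vs. cube growth

def voxel_index(x, y, z):
    """Flatten an (x, y, z) address into a single storage index."""
    return (z * H + y) * W + x

# A voxel on the near face and the voxel directly "behind" it at maximum
# depth are both legitimate addresses in the same 3-space.
print(voxel_index(10, 20, 0))
print(voxel_index(10, 20, D - 1))
```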

The Z slice method may sound somewhat familiar. It’s similar to the CAT or MRI scan imaging techniques already in use in modern medicine. The composite three-dimensional image is constructed by merging information from multiple 2D planes at ever-increasing depths in the Z direction. Just as computers can reassemble these images into a three-dimensional model, so too would a 3D vision system. The only difference is that a 3D vision system would have to do this in real time, whereas scanning imaging systems today must use a time-slice methodology.
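
As a rough sketch (NumPy and the stand-in capture function are my own illustration, not any real scanner’s interface), the assembly step is just stacking the per-depth planes into one volume that can then be read at any depth:

```python
# A sketch of building a volume from 2D slices captured at increasing Z depths,
# in the spirit of CAT/MRI slice reconstruction.
import numpy as np

W, H, DEPTH_STEPS = 64, 64, 32

def capture_slice(z):
    """Stand-in for one 2D capture at depth z; a real system would image here."""
    return np.random.rand(H, W)

# Merge the per-depth planes into a single volume indexed as volume[z, y, x].
volume = np.stack([capture_slice(z) for z in range(DEPTH_STEPS)], axis=0)
print(volume.shape)                      # (32, 64, 64)

# Once assembled, any interior or back-side point can be read directly; the
# real-time requirement above is just a matter of doing this fast enough.
print(volume[DEPTH_STEPS - 1, 10, 10])   # a sample from the deepest plane
```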

Constraints? Same as any 2D system. Pixels will be black if no photons are being reflected. Fields of view are not infinite for real-world systems (i.e. we can’t see infinitely far in the X and Y dimensions). Some corollary to peripheral vision could constrain a 3D vision system.

Of course, all of this is mostly hypothetical, since aside from scanning CAT, MRI, and X-Ray systems, I’m not aware of anything that even comes close to this in the real world, but hopefully I’ve made my point. True 3D vision must allow the viewing of all points on a three dimensional model simultaneously (within the constraints listed above - though there could be others).

BTW, it’s “spatial”, not “spacial”.
– tracer, who has never watched the Spacial Olympics.

tracer writes:

Ummm… actually, it’s both. Either is an accepted form. Look it up.