In modern 3D movies, does each eye's image have half the resolution of a non-3D film image?

I saw Prometheus in 3D a couple of weeks ago, and it was the first time I’d seen a modern 3D movie. (I think the last 3D I had seen before that was “Jaws 3-D” in 1983.)

While the image was good, and the 3D effects pretty cool, there was something “off” about it I couldn’t put my finger on. It was enough that I think I would have preferred to see it not in 3D.

I speculate that the image each eye receives has half the resolution of a normal movie image. Is this true? It might explain the “wrongness” of it.

It could also be that the glasses made the movie darker than it should have been. When I took off the glasses, the movie seemed a bit brighter (but blurred with two images of course). I would hope that they would compensate for that, though, and jack up the brightness a bit so that the final 3D image is at the intended brightness.

3D does decrease the brightness by a factor of either 2 or 4, depending on how they manage the filters at the projector. Theaters should use brighter projectors to compensate for this, but not all do.
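To put rough numbers on it, here’s a back-of-the-envelope sketch in Python. The 50% figure assumes an idealized polarizer passing half of unpolarized light; real filters and glasses lose somewhat more:

```python
# Rough sketch of 3D brightness loss, relative to a 2D showing.
# Assumption: an idealized polarizer passes ~50% of unpolarized light;
# real filters and glasses absorb a bit more on top of that.

projector_output = 1.0  # normalized 2D brightness

# Single-projector systems that switch polarization for one eye at a
# time: each eye sees the screen only half the time, through a filter
# that already cut the light in half.
single_projector = projector_output * 0.5 * 0.5   # ~1/4 of 2D

# Dual-projector systems: each eye gets its own continuously lit,
# polarized image, so only the polarizer's 50% cut applies.
dual_projector = projector_output * 0.5           # ~1/2 of 2D

print(f"single projector: {single_projector:.2f}x 2D brightness")
print(f"dual projectors:  {dual_projector:.2f}x 2D brightness")
```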

I doubt whether resolution is really the issue, but all movie 3D inevitably seems a bit off because it simulates only one* of the several mechanisms that the visual system uses to perceive distance. It is a mistake to think that the perception of depth in normal vision depends entirely on the separation of the eyes (binocular parallax). Far from it: other depth cues are even more important in most circumstances. But 3D movies add nothing more than a binocular parallax cue to what you would see in 2D.

*Actually maybe more than one, because there are cues to depth, such as ground texture gradients and atmospheric blurring, even in conventional 2D movies. However, important things like head-motion parallax, and the accommodation of the crystalline lens of the eye to focus on things at different distances, are inevitably missing in 3D movies. Thus, in fact, we get conflicting information about depth from them, which may well be why they give some people a headache.

The focusing effect only works at very short distances, and shouldn’t be relevant for a movie (2D or 3D). Head-motion parallax could be relevant, but that depends on just how much you move your head while watching a movie. And I think that all of the other depth perception cues are also found in 2D movies.

Well, binocular parallax in real life is only relevant at quite short distances too. However, in movies you are often dealing with close-up shots that are attempting to simulate looking at things at short distances. If something is depicted as close enough for binocular parallax to be relevant (and thus for the actual 3D technology to produce anything like a natural effect), it is probably close enough for focusing to be relevant too.

Related to this is the focus/convergence issue. 3D movies (like 2D ones) require you to focus on the screen, which is at a fixed, relatively far distance. Normally, our eyes also converge so that an item falls on corresponding points on each retina, and this convergence is tied to the focus distance. However, since 3D movies mess with your sense of distance by manipulating binocular disparity, they break the normal connection between focus distance and convergence. This gives many people headaches, as well as a sense of visual wrongness (since convergence itself is also a source of depth information).
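To make the conflict concrete, here is a toy geometry sketch in Python. The eye separation and screen distance are illustrative assumptions, not measurements of any particular cinema:

```python
# Toy model of the focus/convergence conflict. Disparity between the
# left- and right-eye images sets where the eyes' lines of sight cross,
# while focus stays fixed at the screen. All numbers are assumptions.

E = 0.065   # interocular distance in metres (typical adult, assumed)
D = 15.0    # seat-to-screen distance in metres (assumed)

def convergence_distance(disparity_m):
    """Distance at which the lines of sight cross, given the horizontal
    on-screen separation of the two images (positive = crossed disparity,
    i.e. the object appears in front of the screen)."""
    return D * E / (E + disparity_m)

for d in (0.0, 0.05, 0.20):
    z = convergence_distance(d)
    print(f"disparity {d*100:4.1f} cm -> eyes converge at {z:5.2f} m, "
          f"but must keep focusing at {D:.1f} m")
```

With zero disparity the two distances agree (the object is “at the screen”); the bigger the disparity, the bigger the mismatch the visual system has to tolerate.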

Head-motion parallax is a very important source of depth information, probably much more important than binocular disparity, which explains why one-eyed people are, in fact, scarcely impaired at all in negotiating real 3D environments, and can, for instance, drive quite safely. (This also explains why animals that do not have two front-facing eyes - i.e., most animals - seem to be able to judge distances perfectly well, despite getting no information from binocular disparity.) Head-motion parallax is effective over a wide range of distances, near and relatively far, and yes, even people sitting still in movie seats will naturally (and often quite unconsciously) move their heads, probably at least in part because their visual system is attempting to get accurate depth information from the visual scene. Of course, it won’t work in a movie.

As you say, and as I said, there are also other depth cues, mostly relevant to quite long distances, that operate even in 2D movies, and, indeed, in still pictures. These are irrelevant to the differences between 2D and 3D movies, except inasmuch as they explain why 2D gives us a perfectly adequate sense of depth in most circumstances. Fortunately, they do not cause the sorts of conflicts within the visual system that current 3D technology tends to produce, as they matter most at long distances, where the effects of both head-motion parallax (barring large head motions, which you will rarely make sitting in a chair) and binocular disparity are at a minimum.

Moved Cafe Society --> GQ, since this is a technical question, not an artistic one.

Thanks!

I think passive 3D TVs (most are active) reduce resolution by half? Autostereoscopic displays should, too.

This is almost certainly what caused the “wrongness” I was experiencing, thanks.

I’m still curious, however, if my supposition is true that in a modern 3D film shown in a modern theater, each eye’s image has half the resolution of a non-3D film.

If it’s the same projector as non-3D, it would have to be, right? There’s only a certain amount of resolution, and if two images are being projected, they would each have to have half the resolution or less, wouldn’t they? Or is there resolution unused by non-3D that the 3D takes advantage of?

Depends upon what you mean by resolution. And it gets messy. Resolution is a measure of information density. You resolve something when you can tell things apart - so in an image, the ability to pick out two lines as separate entities, or two levels of light. With a digital image you can talk about pixels and dynamic range. With film it gets a bit messier, but the basic answer is the same.

Our eye/brain may integrate multiple frames and pull some additional detail out of the process across those frames, or at least the perception of detail (for our purposes the difference doesn’t matter: what you see is what you see). Depending upon the projection process you may be getting fewer frames per eye. Alternating frames L-R-L-R means that you can get a drop in true frame rate per eye. So you could argue that there is a drop in total information available, and a loss of perceived resolution to each eye. But each individual frame that one eye sees is a full movie frame, with the same optical resolution as a 2D image. And the brain will also integrate the images from both eyes into a whole, and is good at this. So the left and right images will be partially recombined in the brain anyway.

All of the above is desperately handwavy. Without some proper numbers and tests it is unlikely you could make a really useful claim one way or the other. The total information content your brain gets is going to be the same, to a first approximation, with 3D or 2D if the total frame rate is the same. How well it is able to integrate the stream when it is split and sent to each eye at half rate is the question. It can only get worse, but how much worse is unclear.

You could imagine a 3D system that squeezed the left and right frames next to one another on a single film frame and then used anamorphic lenses to project both, overlapping, filtered, and at the correct width. Each frame would trivially have half the resolution, since it used half the film area. But again, the total information available would be about the same. Whether your brain is able to reintegrate that sort of mess is another matter. It would probably manage some, but I would bet not much.
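For what it’s worth, here’s a hedged sketch of how the per-eye numbers might compare under a few delivery schemes. The 2K/24 fps baseline is an assumption, and the alternating-frame row assumes the total frame rate is held fixed rather than doubled:

```python
# Per-eye spatial and temporal resolution under some hypothetical 3D
# delivery schemes, relative to an assumed 2D baseline. Illustrative
# numbers only; real systems (e.g. triple-flash projection) differ.

BASE_W, BASE_H, BASE_FPS = 2048, 1080, 24  # assumed 2K/24fps 2D baseline

schemes = {
    "alternating frames, fixed total rate": (BASE_W, BASE_H, BASE_FPS / 2),
    "side-by-side squeeze + anamorphic":    (BASE_W // 2, BASE_H, BASE_FPS),
    "dual projectors":                      (BASE_W, BASE_H, BASE_FPS),
}

for name, (w, h, fps) in schemes.items():
    print(f"{name}: {w}x{h} per eye at {fps:g} fps per eye")
```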

I suspect most current 3D projection systems are digital, so you don’t have to worry about things like squeezing two frames side by side on the film.

It also simulates parallax only at a certain average value, I think. They always look a bit “off” to me, too, but that’s because they actually give me greater stereoscopic depth than I get in real life. I’m not entirely sure how that works, but it’s noticeable to me - sort of the depth-perception equivalent of a color movie where the colors are made more saturated/vivid/brighter than they would be in real life. It wouldn’t surprise me if there are people the opposite way, so that modern 3D looks flatter than real life.

In other words, it’s not a perfect technology.

I think this is an erroneous assumption on your part. There are systems out there that project images of standard resolution, but twice as fast (or even four times as fast), alternating images so it goes right-image/left-image/right-image/left-image. The special glasses are polarized, as are their respective images, so that when the left-eye image is projected the right eye can’t see it, and vice versa. This allows the projection of 3D images at the same resolution as 2D images. It should be occurring fast enough that the brain can’t detect the “gaps”, which is how movies work in the first place anyway: by projecting a series of still images so fast that to the brain it looks like uninterrupted motion.

Most commonly, images are presented alternately.

For example, with a RealD screening, there is a gimmick built onto the projector that alternately polarizes the projected image 144 times per second. I.e., if the framerate is the standard 24 fps, instead of a single mono image being shown for 1/24th of a second, during that interval you’d alternately see the corresponding frames for left eye, right eye, left eye, right eye, left eye, right eye - and then on to the next pair for the next “frame.”
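The 144 figure falls straight out of the arithmetic, assuming the usual “triple flash” description of RealD is right:

```python
# Where 144 polarization switches per second comes from:
fps = 24       # standard film frame rate
eyes = 2       # one left image and one right image per film frame
flashes = 3    # each eye's frame is flashed three times ("triple flash")
print(fps * eyes * flashes)  # -> 144
```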

This works much better than older systems which used two projectors, because it is much easier to keep synchronized.

Even with old-school anaglyphic 3D (where you’re looking at a single image which contains information to be filtered for both eyes) you wouldn’t really say that there is less resolution. The fineness of the image remains the same, but you lose colour depth.

Well, in that case you are losing (potential) temporal resolution rather than spatial resolution. In either case (spatial or temporal), however, I am fairly sure modern technology is capable of producing a level of resolution in the projected picture well above what the human eye can discern from the distance between a cinema seat and the screen. Although, with everything else kept the same, a 3D projection would have half the (either spatial or temporal) resolution of a 2D movie, everything else does not have to be kept the same, and the loss of resolution can easily be compensated for.

That is why I said, initially, that resolution is unlikely to be the issue. Certainly it is a readily solvable problem, and almost certainly one that has actually been solved. The reasons for the unnatural and sometimes unpleasant look of 3D movies are those I outlined above, and they are a fundamental limitation of current 3D technology, which attempts to work only with the binocular disparity depth cue, and fails to simulate the effects of the other depth cues that the visual system is naturally tuned to look for, thus producing an informational conflict.

Yes, they do, but there’s no reason the TV can’t be built to have double the resolution so that this isn’t an issue.

Active 3D TVs reduce the framerate by half in order to pull it off, but, again, they usually increase the total framerate to make up for this.

This is not a problem with movie projectors if they use two projectors, as the light is polarized by a lens and thus two different pixels can occupy the same space at once*, but you do have the temporal problem if they use some sort of polarization switch instead.

*Which would also be the case if you had a projection TV that used two projectors.

Ah, I did not realize that there was this alternating image process, interesting. More support for the idea that it was the focus/convergence issue that I was having trouble with, and not a resolution issue. Thanks for the info.

Do you know of a website that rates individual theaters on how well they present their 3D films? I’ve only seen one (recent) 3D movie, and the presentation was awful. It was much too dark. So now I know not to go back to that theater, but I’d rather find out which theaters are good without wasting $12 a go at it.

This statement doesn’t really make a lot of sense. The framerate isn’t changed in real terms. Sure, “potentially,” you could sacrifice frames - but we still have unexploited room left.

I don’t think anyone is about to complain that we could enjoy a 96fps Hobbit if only we weren’t wasting all that potential for stereo, though. You have much more perceptible and useful information from stereo than from an excessive framerate.

Well, it’s true, you are losing potential temporal resolution. Just because you’ve got plenty to spare doesn’t change that fact.