After having seen Avatar in both Imax 3D and Real 3D, I noticed something: the effects seemed to “pop out” of the screen in Imax, whereas they seemed to “pop in” in Real 3D (as if I were looking through a window).
Does this distinction actually exist (or did I make it up) and why?
An educated guess: Pop-out effects can go weird if they’re near the edge of the screen, since the screen edge effectively occludes the effect even though it’s supposed to be behind it. Good 3D moviemaking technique therefore tends to avoid pop-out, or, if it’s used, to restrict it to something near the center of the screen. But IMAX screens are larger than ordinary screens, so you’ve got more “safe area” available away from the edges, and you can afford to make more things pop out.
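That “screen edge occluding a pop-out object” problem is sometimes called a stereo window violation. A toy sketch of the geometry (my own illustration with made-up coordinates, not anything from an actual projection system):

```python
def parallax(x_left: float, x_right: float) -> float:
    """Horizontal parallax of one object between the two eye images.

    Negative (crossed) parallax means the object appears in front of
    the screen, i.e. it "pops out".
    """
    return x_right - x_left


def window_violation(x_left: float, x_right: float, frame_width: float) -> bool:
    """True when a pop-out object is cut off by a frame edge, so the
    screen border occludes something that should be in front of it."""
    if parallax(x_left, x_right) >= 0:   # at or behind the screen: fine
        return False
    return min(x_left, x_right) <= 0 or max(x_left, x_right) >= frame_width


# Same pop-out offset, two placements on a 100-unit-wide frame:
print(window_violation(40.0, 38.0, 100.0))  # center of frame: False
print(window_violation(1.0, -1.0, 100.0))   # at the left edge: True
```

A bigger screen means more central area where the first, safe case applies, which is the “safe area” argument above.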
Thanks, but that doesn’t quite explain why the same movie looked different (to me) on them.
Here’s a quote I found that hopefully better explains what I’m trying to say (it also suggests there is a difference, and if he’s right, I’m curious why):
" The immersion experience with RealD is a bit different compared to what you get at IMAX 3D projections as here the depth perception is of actual depth so the action is more going inside the screen, and not popping out of it." http://3dvision-blog.com/tag/pop-out-effect/
That is a poorly worded sentence and the blog doesn’t do a lot more to describe the “feeling”. It looks to me like all 3-D techniques make the same assumption about eye width and, along with that, the offset of the two frames. So none should “pop” more or less than the others. I would say popping depends more on screen size, screen shape, and distance to the screen than on the techniques themselves. There may be a component of the amount of light being reflected and how the edges appear, but I still think this is a quality of the movie theater and not the 3-D technology itself.
Think about it this way: if there were a way to “pop in” to the screen, every movie would have a literal cliff-hanging scene to take advantage of it, instead of having a large animal run over the audience’s heads.
I’m not sure if I can actually answer the OP’s question but…
Imax 3D uses two projectors. The glasses are linearly polarized. The screen is huge.
Real 3D uses a DLP shutter in front of a single projector and alternates left and right images. The glasses are circularly polarized. The screens are smaller. There is a loss of screen brightness since half the light is being blocked at any given time.
I too saw Avatar in Imax 3D and Real 3D and the latter couldn’t compete with Imax - I was very disappointed.
In a 3D system, whether and how much an object on the screen pops out or falls in is just a matter of how far, and in what direction, the object is offset for each eye. In theory any 3D system (that I’m familiar with, at least) could do either. So my guess would be that the reason Real3d doesn’t seem to pop out is that they have the convergence (the plane where an object is at screen depth, i.e. in the same position for both eyes) set so that the closest objects are at or near the plane of convergence.
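The “offset for each eye” idea can be put into numbers with a similar-triangles model (a toy sketch with assumed values, not any theater’s actual calibration): the perceived distance is wherever the two eyes’ sight lines cross.

```python
EYE_SEP = 6.5  # cm; assumed typical interocular distance


def perceived_distance(parallax_cm: float, view_dist_cm: float) -> float:
    """Distance at which the two eyes' sight lines cross.

    parallax_cm = x_right - x_left of the object on the screen surface.
    Zero parallax puts the object on the screen (the convergence plane);
    negative (crossed) parallax pulls it in front, positive pushes it back.
    """
    return view_dist_cm * EYE_SEP / (EYE_SEP - parallax_cm)


# Viewer sitting 4 m (400 cm) from the screen:
print(perceived_distance(0.0, 400.0))    # 400.0 -> on the screen itself
print(perceived_distance(-6.5, 400.0))   # 200.0 -> pops out toward the viewer
print(perceived_distance(3.25, 400.0))   # 800.0 -> recedes behind the screen
```

The sign and size of the offset are the whole story, which is why any system that can show two offset images can, in principle, do either pop-out or pop-in.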
As for why they set it that way, I haven’t a clue. It could be that all Real3d theatres are set up that way, or it could be that the person who set it up just felt it was more comfortable or didn’t think to play with the settings. That’s assuming the settings are even on the projector; I’ve never seen one and don’t know how they’re set up.
According to MrFloppy’s summary of how the systems differ, changing Imax’s convergence would be a simple matter of moving/angling the projectors closer together or further apart, whereas Real3d’s system could have the convergence baked into the video file. Though it could just as easily have a setting to change the convergence. You’d have to ask someone who makes or runs the projectors to be sure one way or the other.
I should have been more clear. I suspect that Cameron and his studio delivered slightly different versions of the movie to IMAX and RealD theaters, to take advantage of the different technologies.
That’s not quite how it works. The convergence plane can’t be changed by moving the projectors. It’s already set in the images themselves. Moving the projectors will only ruin the effect. It takes hours to set up the projectors right, and a slight bump can screw up the setup.
Convergence plane is often changed from shot to shot, depending on the effect the filmmaker wants to create (like any other aspect of filmmaking). It’s unlikely each company would be given the raw data and the permission to set this up the way they wanted, especially on a film like Avatar.

The difference the OP is seeing is probably just due to the quality of the systems and the size of the screen. There are a lot of differences in the systems (like linear polarization vs. circular), which have a number of trade-offs and cost differences. The IMAX system is probably the best of the available options. It could also be possible that the setup in the Real 3d theater was screwed up in some minor way.

What generally happens when something is ‘wrong’ (like an object is in front of the convergence plane but cut off by the edge of the screen) is that the brain doesn’t know how to interpret the image (some depth cues are saying the thing is close, and others are saying it’s far) and it just looks ‘weird’. This wouldn’t lead to pop-in, but the OP’s brain might have interpreted it that way because it just didn’t know what to make of the situation.
That doesn’t change the fact that the depth is based on how much each portion of the image is offset, though. Moving the projectors in relation to each other should change the offset and therefore the convergence plane/depth range; it would just change it for the whole movie. A scene where the prominent focus was at around screen depth could be changed so it’s now a little in front of the screen. But doing so would also push the scenes that were already in front of the screen further out (possibly to the point where it would be impossible to focus on them), and pull a scene that was deep behind the screen closer. No? Not that I’m saying they actually do this, or should, as it would have the side effect of projecting part of the movie onto the wall.
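The “whole movie shifts together” point can be shown with a toy example (made-up numbers, not real projector behavior): sliding one projector sideways adds the same constant to every object’s parallax, so the entire depth range moves as one.

```python
def shift_parallaxes(parallaxes, projector_shift):
    """A horizontal projector offset adds the same amount to every
    object's parallax; it can't retune one shot without the others."""
    return [p + projector_shift for p in parallaxes]


# Parallaxes for three objects: pop-out, screen depth, deep background.
scene = [-2.0, 0.0, 4.0]
print(shift_parallaxes(scene, -2.0))  # [-4.0, -2.0, 2.0]
# The screen-depth object now pops out, but the object that already
# popped out is pushed even further forward, and the background comes
# closer -- the whole range slides together.
```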
But that’s beside the point. All I’m trying to say is that neither system should be any more or less capable of popping out or in. I was using the move-the-projectors idea as an illustration of how depth in a 3D system works and how convergence could, in theory, be tweaked, and as speculation on why Real3d apparently doesn’t pop out as much. The answer would likely be either that someone, at some point, decided it shouldn’t, or that someone screwed up.
The angle of parallax is set in the images. If the projection doesn’t match, you’ll just get ghosting. I work in video games, so it’s possible film is different, but I’m pretty sure the fundamental principles are the same. If I’m running a game level, I can change the convergence plane dynamically by changing a few variables in the game engine, but this is running real-time and all the 3d data is there. If I render out a scripted sequence though, it’s now a series of 2d images, and the convergence plane can’t be changed because the image data only exists with the camera distance and angles that I set when it was rendered (simulated, obviously; there aren’t any ‘real’ cameras).

Look at an anaglyph 3d image like this one. The convergence plane is where there are no red or cyan offsets, in this case on the guy’s face. You can’t change this by the way you project it or view it or by altering the glasses. You can only change it by altering the image itself. If you had the source image, you could work it in Photoshop and render it with a different convergence plane, but the convergence is still determined by the image data.
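To make the “convergence is baked into the image” point concrete, here’s a toy sketch of how an anaglyph merges the two eye views (1-D “scanlines” of made-up grey values, not real image data): wherever the left and right samples line up with zero offset, that content sits on the convergence plane, and no projection or glasses trick can move it.

```python
def make_anaglyph_row(left_row, right_row):
    """Merge one scanline: red channel from the left eye, green/blue
    ("cyan") from the right eye, as a list of (R, G, B) tuples."""
    return [(l, r, r) for l, r in zip(left_row, right_row)]


left  = [10, 20, 30, 40]
right = [10, 20, 40, 30]  # last two samples are offset between the eyes

row = make_anaglyph_row(left, right)
print(row[0])  # (10, 10, 10) -> eyes agree: on the convergence plane
print(row[2])  # (30, 40, 40) -> red/cyan offset: parallax baked in
```

Once the rows are merged like this, the per-object offsets are fixed; to change them you’d need the original left/right sources, just as the post says about rendering with a different convergence plane.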
I think it’s kind of the opposite. Our visual systems are so good at perceiving this stuff that it’s really hard to trick them. I read an interesting article (that I can’t seem to find now) that said we have twelve or so cues for perceiving depth, and normal movies already use nine of them. Can’t remember them all, but they vary from obvious stuff like near objects occluding far objects to more subtle cues like the colors of objects in the distance being less saturated than near ones. 3D movies only use two additional cues to achieve the effect (stereopsis and convergence). It’s possible to mix up these cues in a way our brains can’t process because it can’t happen in the real world.

If you look at the Avatar video game, the user interface or HUD renders on top of everything as HUDs always do. No matter where you go or how close you get to something, the HUD is still there, so you perceive it as being closer. However they inexplicably set the HUD deep behind the convergence plane, so when a character passes close in front of you, the HUD appears to be in front of them and behind them at the same time. It’s vaguely uncomfortable to look at, because your brain is seeing it but it’s simultaneously telling you it’s not possible.