I hate 3D movies

I get that there are 2 major methods of producing 3D movies (“shot-as” and post-production) but to me the difference isn’t as large as people try to convince me it is. The “shot-as” 3D still looks like flat layers to me, just more of them.

Maybe that doesn’t help, but it was primarily the cardboard cutouts thing that killed it for me.

I didn’t get that at all. Again, it was all animated cardboard cutouts to me. And adding to the unrealism was the fact that the 3D gave it a “soap opera effect,” which made images like the one I linked to in the OP look just like, well, the image I linked to in the OP: it looked like a bunch of people wearing cheesy costumes standing in front of a background. It made many of the scenes very (unintentionally) comical.

Right. I have little interest in seeing movies in 3D, and so I’ve only ever gone to the 2D screenings. After Avatar it seemed like the ratio of 3D to 2D screenings was pretty high, so it would be more difficult to find a time when a 2D screening was showing. But more recently it seems that it’s just about evenly split between 3D and 2D, and I’ve never had any problem finding a 2D screening.

I don’t have normal depth perception, and watching 3D is a LOT of work. I wear glasses, and I have to hold the 3D glasses over my specs at an angle.

Forget it.

I can’t see those stupid Magic Eye pictures either.
~VOW

They don’t use any modelling; it’s strictly working with 2D images.

You’re not going to get totally flat separations on planes, because the operator can use gradients. To visualize how this works, take a look at a depth map used for creating synthetic 3D images.

This image maps the assigned depth of each pixel of a 2D image. The brighter the pixel, the closer it is. (Imagine overlaying this image over the photo it has been “traced” from, and then using software to shift each pixel in the original image sideways by a certain amount, according to the value of the corresponding pixel in the depth map.) This example is not very detailed, and would look very “cut-out.”
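To make that sideways-shifting step concrete, here is a minimal sketch in Python. This is only my own illustration of the principle, not any studio’s actual conversion pipeline; the names (`shift_view`, `max_disparity`) and the crude hole-filling are assumptions for the example. Brighter depth values get a larger horizontal shift, and the gaps “revealed” behind shifted pixels are filled naively.

```python
import numpy as np

def shift_view(image: np.ndarray, depth: np.ndarray, max_disparity: int = 12) -> np.ndarray:
    """Synthesize one eye's view from a 2D frame plus a depth map.

    image: H x W x 3 uint8 frame; depth: H x W uint8 map, brighter = closer.
    Each pixel is shifted sideways in proportion to its depth value.
    """
    h, w, _ = image.shape
    out = np.zeros_like(image)
    # Brighter (closer) pixels get a larger horizontal shift for this eye.
    disparity = (depth.astype(np.float32) / 255.0 * max_disparity).round().astype(int)
    for y in range(h):
        written = np.full(w, -1, dtype=int)   # per-row z-buffer of depth already written
        for x in range(w):
            nx = x + disparity[y, x]
            if nx < w and depth[y, x] > written[nx]:
                out[y, nx] = image[y, x]      # closer pixels win (occlusion)
                written[nx] = depth[y, x]
        for x in range(1, w):
            if written[x] < 0:                # hole revealed behind a shifted pixel
                out[y, x] = out[y, x - 1]     # crude fill: repeat the neighbor to the left
    return out

# Toy example: a flat gray frame with one "closer" square in the depth map.
frame = np.full((120, 160, 3), 128, dtype=np.uint8)
depth = np.zeros((120, 160), dtype=np.uint8)
depth[40:80, 60:100] = 255   # this region shifts the most, so it "pops" forward
right_eye = shift_view(frame, depth)
```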

When the contours of the face are more closely mapped, the depth map will look more like this, and the final image will look more natural.

These maps are not the product of the software used to convert studio motion pictures to 3D, but the principle is the same - the operator has to supply depth information. Motion picture stereo conversion software comes with plenty of little shortcuts to automate things, such as taking depth information from perspective cues, or inferring it from the motion parallax produced by lateral dollying. Perspective cues can only take you so far, though - you need clear, basic linear geometry in the shot for them to be useful at all. The process is still largely dependent on human operators tracing the image and assigning depth information - where the software does the heavy lifting is in interpolating between key frames, and in automating the filling-in of the areas revealed “behind” shifted pixels.
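For the key-frame interpolation part, a hedged sketch of the idea: plain linear blending between two operator-drawn depth maps. Real conversion tools are far more sophisticated (they track rotoscoped shapes and use better motion curves); the function name and the straight-line blend here are just my own simplification.

```python
import numpy as np

def interpolate_depth(key_a: np.ndarray, key_b: np.ndarray, n_frames: int) -> list:
    """Blend linearly from one operator-drawn depth keyframe to the next,
    producing a depth map for every in-between frame."""
    frames = []
    for i in range(n_frames):
        t = i / max(n_frames - 1, 1)          # 0.0 at key_a, 1.0 at key_b
        blended = (1.0 - t) * key_a.astype(np.float32) + t * key_b.astype(np.float32)
        frames.append(blended.round().astype(np.uint8))
    return frames
```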

When you have fairly complicated scenes and a lot of motion, you are necessarily going to have parts where the depth information supplied doesn’t quite match what you can intuit must be so - and this is very disconcerting/distracting.

I don’t understand how someone’s vision could be such that “watching 3D is a lot of work”. I can understand how it could fail to work for some people, but that should produce an effect no different from a 2D movie. Do you also have difficulty watching a stage play?

What I meant was, I would expect that they’d construct the depth map by using a computerized model of a head.

The problem that most people have is that the borders of the movie screen produce some disorienting images as it transitions from the flat screen (and the screen’s surroundings) to the simulated 3D image on the actual screen.

I like 3D animated movies for some reason, but live action 3D movies, count me out.

Avengers was the first modern 3D movie I’d ever seen, and I wasn’t that impressed - mostly because of how DARK it was! When I peered over my glasses, there was a bright, colorful movie on the screen, but through them it was much dimmer and murkier. Not a fan.

You might think so, but making a perfectly-matching model and then animating it so that every frame is a perfect match for the original scene would be cost-prohibitive.

Oh, OK, that could be valid. But it’s easily enough addressed by the filmmakers, by minimizing the number of things they place in front of the plane of the screen, and keeping the few that they do away from the edges. If they put everything behind the plane of the screen, then it shouldn’t be any different, qualitatively, from seeing a real 3D scene through a window.

The problem, at least for me, is that my eyes spend the entire movie trying desperately to reconcile the images, and since my eyes don’t work together properly, this leads to massive eye strain, nausea, and a migraine. So yes, it is difficult.

Watching REAL people, like stage actors, is not forcing my eyes to try to take multiple images and reconcile them into one.

James Cameron said on Science Friday a few weeks ago that theater brightness is a big priority.

Of course it is. That’s an inevitable result of your eyes not being in the same location.

I get motion sickness from 3D films, IMAX films, and too much shakey-cam. So yeah, I’ll get on board with this.

I haven’t been to one for a few years now, but I’ll be damned if I’m going to try again with the newer tech and pay a premium price to have to close my eyes through a whole movie while my stomach settles down.

I thought the Avengers 3D conversion was done generally well, mostly because it wasn’t realistic – it was being used as an element of cinematography. If you can’t do it perfectly, then do it artfully. There were a couple of disorienting shots, but by and large they used it appropriately to bring focus to an element of the scene. Not only by popping it forward, but sometimes by pushing it back behind other elements, like some of the shots in the lab where Stark or Banner are working surrounded by the transparent computer displays.

Certainly none of it was as terrible as the preview for the next Spider-Man film that IMAX was showing before the feature, which contains a large object coming ‘towards’ the viewer very quickly, and apparently trying very hard to intersect with your head. My eyes do not focus inside my skull, filmmakers, please respect that.

I can see where it would be massively annoying if you were expecting it to be a full 3D environment where you could change focus at will, though. In a film intended to be less fantastic, it would probably get on my nerves as well.

It’s the stupid glasses. Apparently, one of my many imperfections is that my eyes don’t align like everyone else’s. It’s not as bad as Alfred E. Neuman of MAD magazine fame, but it’s enough to throw off my depth perception.

In order to see 3D with the stupid glasses, I have to physically hold them AGAINST my regular eyeglasses, and then tilt them at an angle to make the images “pop” out of the screen like they do for normal people. Since I don’t go through life with a gauge that shows me exactly what the tilt angle is, I have to keep adjusting it.

And whenever I MOVE, I have to start all over again. Try watching a movie frozen in one position.

And the Magic Eye pictures are a big hoax.
~VOW

Yeah, 3D suffers mightily when combined with rapid cuts and shakycam, elements that are thankfully missing from stage plays.

Well, except for that off-Broadway performance of Agatha Christie’s Ten Little Epileptic Shutterbugs, but the less said about that night, the better.

I wonder if there is some correlation between people who can’t see the 3D images in a ‘Magic Eye’ picture and those who have a hard time watching the new 3D movies.

You would think there would be a study on this by now.

I can’t do the Magic Eye things for shit, but have no trouble with 3D.