The way I understood it was that films are showing you static images, whereas the computer needs to generate images—and physics—on the fly. That is, if a man walks into a bar (in one second), a film shows 24 discrete images/slices of time as he moves from point A to point B. All the physics/light effects (e.g., whether he hits his head on the bar, how the flashing light casts shadows behind him, how other characters react, etc.) happen in real time and are simply recorded on film.
A game, however, isn’t first calculating which position the man would be in for each of those 24 images and then rendering them – it has to move virtual objects and compute each collision, light effect, etc., as it goes. How fast it figures these things out is not necessarily related to how our eyes perceive motion, but rather to how well it manipulates/redraws the objects in the field of vision.
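To make that concrete, here's a toy sketch of the difference (all the names and numbers here are my own illustrations, not from any real engine): the game has to run the physics for each of those 24 slices before it can draw anything, whereas the film just plays back slices that were already captured.

```python
# Minimal sketch: a film "plays back" prerecorded frames, while a game
# must simulate physics before it can draw each one. Illustrative only.

DT = 1 / 24  # one frame of a 24 fps second

def simulate_step(x, v, dt):
    """Advance the man toward the bar and check one collision."""
    x += v * dt                 # move the virtual object
    if x >= 10.0:               # did he reach (hit) the bar?
        x, v = 10.0, 0.0        # stop at the collision point
    return x, v

def game_second():
    """The game computes each of the 24 states on the fly."""
    x, v = 0.0, 12.0            # start at point A, walking at 12 units/s
    frames = []
    for _ in range(24):
        x, v = simulate_step(x, v, DT)   # physics first...
        frames.append(round(x, 2))       # ...then "render" the result
    return frames

print(game_second())  # 24 positions, ending pinned at the bar
```

The point of the sketch is just that the per-frame simulation work exists at all; in a real game it's collisions, lighting, AI, and so on, all of which must finish before the frame can be drawn.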
This, of course, is not a complete explanation (traditional things do come back into play), nor is it meant to be a full-fledged GQ answer. That is, before I go on and on droning about computer graphics, I figure it’s best to throw out this tidbit and see if it gets torn to shreds—if I’m completely off base, it’s better to be quickly wrong than verbosely wrong.
Tying into the motion blur answer is the fact that stop motion animation has this same problem: each photographed frame is static, in the same way that each “frame” rendered by the computer is static.
Since movies are obviously fixed at 24 fps and can’t brute-force their way around this problem in the same way that computer games can, “go motion” is sometimes used to simulate motion blur in each frame.
Animators vary the number of drawings per second depending on the action they’re trying to depict. Fast action will be animated “on ones”, which means they actually draw 24 different pictures for every second of screen time. This is expensive, so scenes with quick motion are kept to a minimum and most animation is done “on twos” or “on threes” instead.
And, of course, if something is moving really fast the animators draw it with motion blur which makes it look smoother.
Maybe, but I don’t see it, because the source images aren’t any different. Under your theory, wouldn’t it be smoother still if each frame were projected 4X (effective 96 fps with 24 distinct frames per second)? If not, where does the assistance to illusion stop?
Anyone know of any studies in this area?
There’s more to it than just fps. For action at sporting event speeds, I think most people can tell the diff between 6 fps and 24. For talking heads, not so much.
Hanna-Barbera used limited animation, where only the characters’ mouths moved, and not every frame at that. That’s a big difference from a video game with continuous rack-'em, smack-'em action.
Actually, no. It just reduces the amount of perceived flicker. Dual (or triple) shuttering actually highlights an artifact known as strobing, in which objects moving rapidly across the screen (or the entire image, in a fast pan) appear to jerk or shudder. One of the major advantages of higher frame rates is the reduction in strobing.
An important difference between film projection and computer monitors that hasn’t been mentioned (and that is, I believe, the reason why dual-shuttering was invented) is this: When projecting film, the screen must be completely dark for a certain amount of time while the first frame is pulled down out of the gate and the next frame is pulled into place. Only when the film is stationary in the gate can the shutter open, allowing light to pass through and projecting the image on the screen.
Obviously, computer monitors, especially flat panel displays, do not have this problem. They never go completely dark. CRT displays create an image with a moving electron beam striking phosphors that glow and then fade after the beam passes; but the screen never goes completely black, as a film screen does. So the nature of the flickering in a (non-digital) movie theater is significantly different from what we experience on our monitors and TVs.
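For the numbers folks: a multi-bladed shutter only changes how often each frame gets flashed on screen, not how many distinct images there are per second — which is exactly what the earlier “4X” question is poking at. A quick back-of-envelope (my own illustrative figures):

```python
# Back-of-envelope: a multi-bladed shutter raises the *flash* rate on
# screen without adding any new images. The 24 distinct photographed
# frames per second are fixed; only how often each is lit changes.

FILM_FPS = 24  # distinct photographed frames per second

def flash_rate(blades):
    """Flashes of light per second with an n-bladed shutter."""
    return FILM_FPS * blades

for blades, name in [(1, "single"), (2, "dual"), (3, "triple"), (4, "4X")]:
    print(f"{name:>6} shutter: {flash_rate(blades)} flashes/s, "
          f"still {FILM_FPS} distinct frames/s")
```

So a dual shutter gives 48 flashes/s and a 4X arrangement gives the “effective 96 fps” mentioned above, but in every case the eye is still only being shown 24 different images per second — which is why flicker improves while strobing doesn’t.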
FYI, here are the highlights of some of the film systems with non-standard frame rates.
Cinerama: 3 strips of 35mm film at 26 (!) fps. Each frame was six perforations tall, instead of the usual four, making the full Cinerama image 4.5 times larger than a standard 35mm frame. Nine films were shot in the 3-strip process, including This is Cinerama (1952) and How the West Was Won (1962).
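A quick sanity check on that 4.5× figure (just arithmetic, nothing authoritative):

```python
# Checking the "4.5 times larger" claim: three strips, each frame
# six perforations tall instead of the standard four.

STANDARD_PERFS = 4   # height of a normal 35mm frame, in perforations
CINERAMA_PERFS = 6   # height of one Cinerama panel
STRIPS = 3           # Cinerama ran three synchronized strips

ratio = STRIPS * (CINERAMA_PERFS / STANDARD_PERFS)
print(ratio)  # 4.5
```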
Todd-AO: 70mm film, 5 perfs tall, at 30 fps. Although about 16 films were shot in the format, only two – Oklahoma! (1955) and Around the World in Eighty Days (1956) – were shot at 30 fps. The rest were shot at 24.
Showscan. 70mm film, 5 perfs tall, at 60 fps. Invented by Douglas Trumbull, who was hoping to create the most realistic film process possible. He experimented with all frame rates from 30 to over 120. He told me once that the ideal was actually 72, above which there were almost no noticeable improvements. But 60 fps had significant practical advantages (U.S. mains current is 60 Hz, the newly developed HDTV formats used 60 Hz, etc.) and was substantially better than 24 or 30. Showscan was truly an amazing and beautiful film experience. It’s a shame it never caught on.
Only a handful of short films were made in the format. Trumbull hoped to use Showscan for certain sequences in his film Brainstorm, but as IMDb puts it, “the costs of retrofitting theaters to show it proved prohibitive.”
IMAX HD: IMAX (70mm, 15 perfs) at 48 frames per second. (Standard IMAX is 24 fps.) A cross between IMAX and Showscan. It was truly remarkable, but was introduced at about the same time as IMAX 3D, and the company decided to promote 3D more heavily. Only one film was made in the format, Momentum, and only about 16 of the world’s 250+ IMAX theaters are capable of running at 48 fps.
So that old coot was right! He really did have to swap out gearing or pulleys or something for Oklahoma!. And it must have been a sweet pain in the butt.
Of course, even without fiddling around with differing formats stuff can get hairy in the booth — we showed a single print of Fatal Attraction in five theaters on interlock, with three on one side of the booth and two on the other side. We used a couple of spare rewind stands to provide extra pulleys to guide the film around the corner.
Now that was a hairy arrangement. You got all five threaded up and hit the switch and prayed that all five lamps would light since a single lamp not coming on would shut down the whole business.
It was an unusual situation — they usually provide multiple prints for a major release.
Persistence of vision can be effective at as low as 10 fps, though I’ve also heard that 12 fps is the minimum frame rate for most people. Regular 8mm cameras and projectors typically ran at 16 fps, which was comfortably above the minimum. Super-8 typically ran at 18 fps (though many cameras also had a 24 fps option), which provided better sound.
If you ever get a chance to see a 48 fps IMAX projector, I highly recommend it (assuming you have any interest in film projection whatsoever). An ordinary IMAX projector already spins its rotor fast; at 48 fps it’s humming along like crazy.
Incidentally, a whole lot more people have seen a 48 fps IMAX film than realize it - the rides “Soarin’” at Epcot Center and “Soarin’ Over California” at Disney’s California Adventure are both IMAX 48 fps.
I’ve seen all the formats I mentioned in my post (including Oklahoma! in 70mm, 30 fps), and you’re absolutely right about IMAX HD. I worked for 12 years at the National Air and Space Museum, and spent a lot of time in the IMAX booth. So I was used to the sound of a regular 24 fps IMAX projector running. I made a special trip to Ottawa to see their IMAX HD system when it was installed in 1992, and was in their booth when they started one show of Momentum. It sounded like the thing was going to explode! It was unbelievable.