Obviously, there’s no reason why anyone would bother animating something that isn’t ever going to be seen at all, but I’m wondering whether it is typical for objects that are temporarily ‘out of shot’ to continue being animated. Easier if I give a specific example:
Shrek is walking through the forest, being pestered by Donkey; in one shot, the (virtual) camera shows only his upper body, although it is clear from his gait that he is still being depicted as walking. So, in an example like this, it’s probably true that the mesh for the whole body is present, but are the legs still being animated, just out of shot, or are there simply no scripted movements for them at that time?
My gut feeling is that the animation is probably still being scripted, for at least a couple of reasons:
The animation of characters is rarely done piecemeal any more; instead, there are meta-scripts built up as a collection of a whole range of movements and changes to body parts, all relating to a particular gesture or action - so rather than painstakingly having to script each joint movement, the animator may be able to specify ‘step to here’ or ‘reach to here’, and have the software work out (or at least suggest a basic framework for) all the details.
The magnitude of the task of producing a feature-length animated movie is such that there are now specific roles for virtual camera operators and scene composers; this being the case, it would seem to be prudent to continue to animate Shrek’s legs, because the composition might need to be changed to include them, or pan past them.
So… I’m not sure if we have anyone here on the board who works in the CGI industry, but the question is this: do CGI Animators continue to animate the meshes of constantly-moving objects, even at times when they momentarily drop out of shot?
There are settings that tell the computer not to render objects out of frame or even out of a certain range.
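To illustrate the idea, here is a minimal sketch of the kind of visibility test a renderer might apply, using a simplified 2D top-down scene. The function name, field-of-view test and range cutoff are all made up for illustration; real packages use proper 3D view frustums per object.

```python
import math

def visible(obj_pos, cam_pos, cam_dir, fov_deg, max_range):
    """Return True if an object should be rendered at all.

    cam_dir must be a unit vector; everything is 2D for simplicity.
    """
    dx = obj_pos[0] - cam_pos[0]
    dy = obj_pos[1] - cam_pos[1]
    dist = math.hypot(dx, dy)
    if dist > max_range:        # beyond the clipping range: skip it
        return False
    if dist == 0:
        return True
    # angle between the camera direction and the direction to the object
    angle = math.degrees(math.acos((dx * cam_dir[0] + dy * cam_dir[1]) / dist))
    return angle <= fov_deg / 2  # inside the field of view?

# Object straight ahead and in range -> rendered
print(visible((0, 10), (0, 0), (0, 1), 60, 100))   # True
# Object behind the camera -> culled, though it may still be animated
print(visible((0, -10), (0, 0), (0, 1), 60, 100))  # False
```

The point of the thread is exactly that this check only gates *rendering*; the animation data for a culled object can still be evaluated every frame.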
In the case of Shrek, showing only his torso does not mean the animators are using only the upper body; often a model has already been made that takes into account the physics of the muscles and fat at every step. In this case the whole model is used, since even if the legs are out of frame, the reaction of the body to every step was already set on the model. One can still set the areas that need to be rendered as a last step.
IOW: the action requires the whole body to be used, but at render time one can tell the computer not to bother with out-of-frame items; the computer will still keep the unshown items in mind when calculating the way bones, muscles and fat react.
The rendered images are usually done in sections: one pass can be the shadows, then the highlights, the lights, the backgrounds*, etc. All those elements are then put together in editing. The reason is that while computers and programs are getting very good at rendering all the items in one step, in practice the CG animators do one element at a time, because it is still faster to put them together in post, and some effects (like smoke in a room) are still easier to add with the editing software.
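A toy illustration of that pass-based workflow, treating each pass as just a grid of brightness values. The particular combine rule (multiply shadows, add highlights) and the sample numbers are my own assumptions for the sketch; real pipelines composite multi-layer image files in dedicated software, but the principle - render elements separately, combine in post - is the same.

```python
def composite(beauty, shadow, highlight):
    """Multiply the shadow pass over the beauty pass, then add highlights."""
    out = []
    for b_row, s_row, h_row in zip(beauty, shadow, highlight):
        out.append([min(1.0, b * s + h)
                    for b, s, h in zip(b_row, s_row, h_row)])
    return out

beauty    = [[0.8, 0.8], [0.8, 0.8]]   # flat-lit render of the object
shadow    = [[1.0, 0.5], [1.0, 1.0]]   # 1.0 = unshadowed, 0.5 = half shadow
highlight = [[0.0, 0.0], [0.2, 0.0]]   # specular pass, added on top

print(composite(beauty, shadow, highlight))
# [[0.8, 0.4], [1.0, 0.8]]
```

Because each pass is a separate image, tweaking (say) only the shadows does not force a full re-render of everything else - which is why doing it in post is faster.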
If possible, the background is done separately from the main action, also to save time at rendering.
This is what I was talking about, really. I understand that not everything needs to be rendered (notwithstanding radiosity from unseen objects and all that), but I was asking whether the model itself has actions that are being performed, even on parts that are not to be rendered at all.
And I would imagine the same is going to be true of things like flocks of birds: that they would have a set of behaviours that were constantly being evaluated and performed, even if they pass behind something or the camera happens to miss them for a moment. So if the camera passes through the flock, the models of the birds behind the camera would still be flapping their wings (that is to say, the computer is still keeping track of their calculated movements), even though they are not visible at all and have no effect on the visible scene at that moment.
That’s how I imagine it being done, anyway - for something that passes in and out of shot a lot, it seems like it would be more work to track when it is visible or not than simply to animate it all the time and let it take care of itself.
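The "animate everything, cull at render" idea described above can be sketched like this. Every bird's state advances every frame, and visibility is only consulted when deciding what to draw; all the class names and numbers here are invented for illustration, not taken from any real package.

```python
import math

class Bird:
    def __init__(self, x, wing_phase=0.0):
        self.x = x
        self.wing_phase = wing_phase

    def update(self, dt):
        # Motion and wing flapping are computed whether or not the bird
        # is on screen, so it stays consistent when it re-enters shot.
        self.x += 5.0 * dt
        self.wing_phase = (self.wing_phase + 4.0 * dt) % (2 * math.pi)

def step(flock, dt, cam_left, cam_right):
    for bird in flock:
        bird.update(dt)                        # always animated...
    # ...but only in-frame birds are handed to the renderer
    return [b for b in flock if cam_left <= b.x <= cam_right]

flock = [Bird(x) for x in (-10.0, 0.0, 10.0)]
drawn = step(flock, 0.1, 0.0, 20.0)
print(len(drawn))   # 2 birds in shot; the third still moved and flapped
```

The off-screen bird's position and wing phase keep advancing, so there is no gap to patch up if the camera later swings around to it.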
Using procedurals like that is relatively new in CG animation. In the early days, with films like the first Toy Story, almost everything was hand animated (actually, for Pixar that’s probably still true for main characters), and in those cases there almost certainly wouldn’t be any animation of things once they were out of frame, except perhaps very basic animation to maintain some consistency, e.g. a steady walking pace.
However, these days a scene may be fully animated as a whole, with camera placement figured out afterward. In those cases it might be better to keep everything animated, as it would actually save time compared with going back in after a camera has been adjusted to fill in any gaps you’ve left.
Well, “flapping” in the mathematical sense only that is. For the computer it is not a big deal to keep track, the birds are just numbers and symbols when they are outside the camera frame.
One has to think of rendering as the bottleneck of computer animation; it is essentially the step where the computer has to stop and figure out how to show the scene properly to human eyes. When one begins working on a flock, usually one creates one bird and then adds some differences to the copies; the behaviour and flight paths can then be constrained to generally follow the route the director wants. When one makes a scene like that, simple real-time renderings are made to follow the action, and then the camera can be put anywhere, or even pass through the flock.
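A hypothetical version of that flock-building workflow: model one template bird, instance it with small random variations, and constrain every copy to roughly follow a route the director wants. All the attribute names and ranges are assumptions made for the sketch.

```python
import random

random.seed(1)  # fixed seed so the "random" variations are repeatable

template = {"wing_span": 1.0, "speed": 5.0, "color": (0.3, 0.3, 0.3)}

def make_flock(n, route):
    flock = []
    for _ in range(n):
        bird = dict(template)                          # copy the template bird
        bird["wing_span"] *= random.uniform(0.9, 1.1)  # small differences
        bird["speed"] *= random.uniform(0.95, 1.05)    # per copy
        bird["offset"] = (random.uniform(-2, 2), random.uniform(-2, 2))
        bird["route"] = route                          # constrained flight path
        flock.append(bird)
    return flock

route = [(0, 0), (50, 10), (100, 0)]   # waypoints chosen by the director
flock = make_flock(30, route)
print(len(flock))   # 30 distinct birds, all built from one model
```

Because every bird shares the same route object, redirecting the whole flock only means editing the waypoint list, not touching thirty birds individually.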
The links to that tutorial are in PDF format to be used in Lightwave, but I noticed lots of similarities with Maya, (that is the 3D package I’m learning to use at college)
To make that work, one should expect only the level of detail one sees in computer or console games. And still there is a lot of work involved; your background work grows exponentially. Or you could reuse backgrounds, like they do in Machinima.
In any case there are many limitations, and even those setups use many tricks to not render items that are out of range or out of view.
For movie or TV productions, keeping track of the mathematical models is the easy part for the computer; animating all the action all the time means telling the computer to remember and add all the visual data to the scene, and pretty soon you’ll run out of resources, or the rendering will take an unacceptable amount of time.
Well, it is possible to do, but in practical terms not every computer at Pixar is a supercomputer, and most photorealistic CGI still requires software rendering.
I would expect that everything in the scene is rendered at some low quality level, so that the virtual camera operator can see what he’s aiming at, and then once the cameras are set, whatever is in view of the camera is rendered at a high quality level. Is this more or less accurate?
To be specific: instead of rendering, a program like Maya shows a virtual scene where you can choose the level of detail of individual objects, or of all of them. The more objects or the more detail one uses, the slower it gets; but hardware is already at a level where one can almost work in real time, showing lots of detail and even textures.
(One can render the action in real time right there - it is called a “playblast” - but the results are not what you could call photorealistic; it can get close to console-game quality.)
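The per-object detail choice described above amounts to a level-of-detail picker. Here is a toy version, assuming "detail" is just a polygon count chosen by distance; the thresholds and counts are invented for illustration, not from Maya or any other package.

```python
# (max distance, polygon count) pairs, from finest to coarsest detail
LODS = [(10.0, 200), (30.0, 50), (float("inf"), 10)]

def pick_lod(distance):
    """Return the polygon budget for an object at the given distance."""
    for max_dist, polys in LODS:
        if distance <= max_dist:
            return polys

print(pick_lod(5.0))    # 200 polygons up close
print(pick_lod(100.0))  # 10 polygons far away
```

A viewport preview (or a playblast) can use the coarse levels everywhere to stay interactive, while the final software render uses full detail for whatever is in frame.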
A good step-by-step of how the interface and objects look before texture is added or the scene rendered is here: