Anyone Understand Pixar-style CGI

Does anyone on the board understand the computational model for Pixar-style computer generated animation? Specifically, can each frame be computed separately from the others? Can you break a frame into multiple rectangles and compute each of those separately from the other rectangles?

The reason I am asking is that I am investigating the feasibility of using a distributed computing model for animation, and I want to understand how much communication (if any) would be needed between computers working on different frames.

I would imagine that it’s possible; most of the professional-level animation programs can be used in render farms, so they already use distributed computing to handle the workload. Whether each unit in the farm renders a separate frame or just handles individual task threads, I don’t know.

Raytracing is a technique similar to what the likes of Pixar use. It is also easy to distribute across machines and gives fantastic results.

A quick Google search on “distributed raytracing system” brought up many hits, such as this one.

HTH.

Most renderfarms just divvy the frames up, because it’s simpler. How often do you have a computer in the process that can’t be dedicated to the task for an hour or so?
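To make that concrete, here’s a rough Python sketch of frame-level distribution. Nothing here is real farm software – render_frame() is just a made-up placeholder for whatever the actual renderer would do – the point is simply that each frame is an independent job:

[code]
# Minimal sketch of frame-level distribution: each worker renders whole
# frames on its own, so the only "communication" is handing out frame
# numbers up front. render_frame() is a stand-in for the real renderer.
from multiprocessing import Pool

def render_frame(frame_number):
    # Placeholder: a real farm would invoke the renderer here and write
    # e.g. frame_0042.exr to shared storage.
    return f"frame_{frame_number:04d} rendered"

if __name__ == "__main__":
    frames = range(1, 101)            # frames 1..100
    with Pool(processes=8) as pool:   # pretend each process is a farm node
        for result in pool.imap_unordered(render_frame, frames):
            print(result)
[/code]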

It is possible to have multiple machines working on the same frame, though. This is called “parallel rendering.” Most of the sweet packages don’t bother with it – there’s little benefit unless you have fewer frames to render than you have machines. It’s good for test rendering: if you can distribute a single frame over thirty machines, you’ve cut down the wait significantly – but if you’re going to render a lengthy scene, it’s actually more work to divvy it up and then recomposite the image.
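Here’s a rough sketch of the divvy-up-and-recomposite approach for a single frame, with a made-up render_tile() standing in for the real renderer. Each rectangle is rendered on its own and the results are stitched back into one image at the end:

[code]
# Rough sketch of splitting one frame into rectangular tiles, rendering
# each tile independently, and compositing the results back into a
# single image buffer. render_tile() is a placeholder for the renderer.
from multiprocessing import Pool

WIDTH, HEIGHT = 1920, 1080
TILE = 128  # tile edge in pixels

def tiles():
    for y in range(0, HEIGHT, TILE):
        for x in range(0, WIDTH, TILE):
            yield (x, y, min(TILE, WIDTH - x), min(TILE, HEIGHT - y))

def render_tile(tile):
    x, y, w, h = tile
    # Placeholder: return a flat grey block instead of real pixels.
    return tile, [[(128, 128, 128)] * w for _ in range(h)]

if __name__ == "__main__":
    image = [[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)]
    with Pool() as pool:
        for (x, y, w, h), pixels in pool.imap_unordered(render_tile, tiles()):
            for row in range(h):                      # composite step
                image[y + row][x:x + w] = pixels[row]
    print("composited", WIDTH * HEIGHT, "pixels from",
          sum(1 for _ in tiles()), "tiles")
[/code]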

Pixar uses RenderMan, which is a [url=http://en.wikipedia.org/wiki/Reyes_rendering]Reyes Algorithm[/url] implementation, at least primarily. Lately it has started to include ray-tracing capabilities. I haven’t studied graphics implementations in great enough detail that I feel comfortable explaining this.

Mental Ray, the other big rendering package on the block right now, is primarily a ray tracer, which parallelizes much better. You don’t slice the frame into rectangles so much as handle each ray independently against the same model: have each computer trace a limited subset of all the rays you need, then aggregate the results.
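To show why ray tracing distributes so naturally, here’s a toy sketch (this is not Mental Ray’s actual API – trace_ray() is a placeholder): every ray is traced against the same read-only scene, each worker takes a strided subset of the rays, and the results are simply gathered at the end.

[code]
# Sketch of per-ray parallelism: every primary ray can be traced against
# the same (read-only) scene with no knowledge of any other ray. Worker k
# takes every k-th pixel; the scene is the only shared data.
from multiprocessing import Pool

WIDTH, HEIGHT = 640, 480
NUM_WORKERS = 8
SCENE = {"sphere_center": (0.0, 0.0, -5.0), "sphere_radius": 1.0}

def trace_ray(pixel_index, scene):
    # Placeholder shading: a real tracer would intersect the ray with the
    # scene and return a proper colour; here we just return a constant.
    return pixel_index, (20, 20, 20)

def trace_subset(worker_id):
    # Each worker independently handles a strided subset of all rays.
    return [trace_ray(i, SCENE)
            for i in range(worker_id, WIDTH * HEIGHT, NUM_WORKERS)]

if __name__ == "__main__":
    framebuffer = [None] * (WIDTH * HEIGHT)
    with Pool(NUM_WORKERS) as pool:
        for chunk in pool.map(trace_subset, range(NUM_WORKERS)):
            for index, colour in chunk:          # aggregate the results
                framebuffer[index] = colour
    print("traced", sum(c is not None for c in framebuffer), "rays")
[/code]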

As I understand it, both of them already use distributed computing models of sorts (I remember a Wired article talking about the ridiculous number of SparcStations they tied up making A Bug’s Life). Either case could be parallelized across frames (you take frame 1, I take frame 2, he takes frame 3, etc.), but I don’t know how much Reyes rendering can be parallelized within frames.

Doh, messed up my coding.
Reyes Algorithm
Mental Ray

Yes and yes, although the matrix transforms required to create a viewport that’s not centered on the view vector are a little tricky to get right.
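If it helps, here’s a sketch of that off-center transform, assuming the standard OpenGL-style asymmetric frustum matrix; tile_frustum() is just a helper I made up for illustration. Each machine narrows the full frame’s frustum bounds to its own rectangle and renders with an otherwise identical camera:

[code]
# Sketch of an "off-center viewport" transform: an OpenGL-style
# asymmetric frustum built for one rectangular tile of the full image.
# The full-frame frustum spans [left, right] x [bottom, top] on the near
# plane; each tile just narrows those bounds.
import numpy as np

def frustum(left, right, bottom, top, near, far):
    # Standard OpenGL glFrustum projection matrix (column-vector convention).
    return np.array([
        [2 * near / (right - left), 0, (right + left) / (right - left), 0],
        [0, 2 * near / (top - bottom), (top + bottom) / (top - bottom), 0],
        [0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0, 0, -1, 0],
    ])

def tile_frustum(l, r, b, t, near, far, x0, x1, y0, y1):
    """Frustum for the tile covering fractions [x0,x1] x [y0,y1] of the frame."""
    return frustum(l + (r - l) * x0, l + (r - l) * x1,
                   b + (t - b) * y0, b + (t - b) * y1, near, far)

# Example: the top-right quarter of a symmetric full-frame frustum.
full = frustum(-1.0, 1.0, -0.75, 0.75, 1.0, 100.0)
quarter = tile_frustum(-1.0, 1.0, -0.75, 0.75, 1.0, 100.0, 0.5, 1.0, 0.5, 1.0)
print(quarter)
[/code]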

For an extreme example of this sort of parallel processing, check out the PixelPlanes and PixelFlow projects, which explored using a separate processor for each pixel.

The biggest problem is that you need to distribute the entire scene – all the geometry and all the textures – to each individual graphics processor. But once you do that they can just chug away happily without passing anything back and forth between them.
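A quick sketch of that communication pattern – the full scene goes out to each worker exactly once, and after that the only traffic is job assignments in and pixel results out. load_scene() and render_rows() are made-up placeholders, not any real renderer’s API:

[code]
# Sketch of the communication pattern: the whole scene is handed to each
# worker exactly once (here via a Pool initializer); afterwards workers
# only receive "which scanlines to do" and send pixel results back.
# They never talk to each other.
from multiprocessing import Pool

_scene = None  # per-worker copy of the full scene

def load_scene(scene):
    # Runs once in each worker process: receive all geometry and textures.
    global _scene
    _scene = scene

def render_rows(row_range):
    # Placeholder: "render" the assigned scanlines against the local scene copy.
    start, stop = row_range
    objects = len(_scene["geometry"])
    return [(row, f"shaded against {objects} objects") for row in range(start, stop)]

if __name__ == "__main__":
    scene = {"geometry": ["teapot", "floor"], "textures": ["wood", "tile"]}
    jobs = [(r, r + 60) for r in range(0, 480, 60)]   # blocks of scanlines
    with Pool(processes=4, initializer=load_scene, initargs=(scene,)) as pool:
        rows_done = sum(len(chunk) for chunk in pool.map(render_rows, jobs))
    print("rendered", rows_done, "scanlines with no worker-to-worker traffic")
[/code]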

Thanks to all of you, you really helped me a lot.

That was quick.