Rendering animated movies and processing hours, adjusted for time

I just ran across a little nugget of trivia in the special features for Bee Movie - they used 5,000 cores - presumably, this means their rendering farm had 2500 dual-core AMD Opteron processors. (Or maybe 1250 quad-core? They didn’t get into much detail.)

They also said that the first Shrek movie consumed 5 million processing hours. Shrek 2 ate up 10 million hours and Shrek 3 took 20 million hours. Bee Movie needed 23 million hours.

The first Shrek came out in 2001 - is there any neat way to convert 2001’s processor speeds to 2007’s speeds? If I’m using Moore’s Law in the right way, Bee Movie would have required roughly 125 million hours to render in 2001. Sound about right?
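Roughly, the math I have in mind looks like this; the doubling period is the big assumption, so the answer could plausibly land anywhere from about 120 million to 370 million hours:

```python
# Back-of-the-envelope Moore's Law scaling, 2007 -> 2001.
# The doubling period is the big assumption here; 18, 24, and 30 months
# give very different answers, so treat the output as a range.

def hours_in_2001(hours_2007, years=6, doubling_months=24):
    """Convert 2007 processor-hours into equivalent 2001 processor-hours."""
    doublings = years * 12 / doubling_months
    return hours_2007 * (2 ** doublings)

bee_movie = 23_000_000  # processor-hours quoted for Bee Movie
for months in (18, 24, 30):
    print(f"Doubling every {months} months: "
          f"{hours_in_2001(bee_movie, doubling_months=months) / 1e6:.0f} million hours")
```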

One bump - guessing this will just die quietly as the result of watching movies while doped up on antihistamines.

Geez, I can’t imagine what the power bill would be like with 2500 stations running under full load for long stretches. I’m curious what the answer to the OP’s question is, too.

Less than you might expect, actually. But still more than I’d want to pay.

I did a little more digging: the processor chips themselves consume about 68 watts each, and they’re dual-core, so that’s 2500 x 68 watts, or 170 kilowatts, just to run the processors. Add some more for RAM, networking, and disk drives, and it’s still going to be respectably low for a 2500-server datacenter.

The main efficiency comes from using blades rather than individual servers: blades draw less power, and they also throw off less heat, so less power is needed for cooling.

Some calculations on the edge of a used napkin come up with about 1.25 megawatts to run 5000 individual servers. Add to that the general rule of thumb that it takes as much power to cool a server as it does to run a server, and you’re sucking down 2.5 megawatts. With the blades, you’re looking at about a fifth of that. You’re also looking at about one-fourth the physical space to contain them.
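In code form, the napkin math looks something like this. The 68 W per chip is the figure above; the 250 W per whole commodity server is my own ballpark, backed out of the 1.25 MW total:

```python
# Napkin math for render-farm power draw.
# 68 W per dual-core Opteron is the figure from above; 250 W per whole
# commodity server is a ballpark assumption backed out of the 1.25 MW total.

chips = 2500                      # 5,000 cores / 2 cores per chip
cpu_kw = chips * 68 / 1000
print(f"Processors alone: {cpu_kw:.0f} kW")                  # ~170 kW

servers = 5000
server_kw = servers * 250 / 1000                             # whole boxes, not just CPUs
with_cooling_kw = server_kw * 2                              # rule of thumb: cooling ~= IT load
print(f"5,000 individual servers + cooling: {with_cooling_kw / 1000:.1f} MW")  # ~2.5 MW

blade_kw = with_cooling_kw / 5                               # "about a fifth of that" with blades
print(f"Blade estimate + cooling: {blade_kw:.0f} kW")        # ~500 kW
```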

Does anyone else think that this is pathetically inefficient? According to IMDb, Shrek 2 was 92 minutes long, which works out to about 132,480 frames. Dividing that into the 10,000,000 processor-hours quoted above means each frame took roughly 75 processor-hours to render. Granted, there’s probably a lot of preparation before the final render, but even so, that seems like a lot. There are video cards that can render 100 high-quality frames per SECOND. Seems like there is room to improve their workflow (maybe use GPUs to do the rendering?).
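To show my work (all napkin math, nothing official from DreamWorks):

```python
# Per-frame cost estimate for Shrek 2 from the numbers in this thread:
# 92-minute runtime, 24 frames per second, 10 million processor-hours.

frames = 92 * 60 * 24            # 132,480 frames
processor_hours = 10_000_000
print(f"Frames: {frames:,}")
print(f"Processor-hours per frame: {processor_hours / frames:.0f}")   # ~75
```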

Z-buffered, texture-mapped polygons at 1200x860 do not even begin to compare to raytraced volumetric objects with surface mapping at whatever full-frame film resolution is. Also, using parallel processors to split up a frame across cores requires a level of rendering reliability that GPUs just will not give. While some people are now starting to consider building real-time raytracers, they will be at screen resolution, not even close to film resolution.

The timing seems to gel with what I heard about the CGI for LOTR - they used a large farm of commodity PCs. While blades would give higher processor density, the required infrastructure (blade fabric, racks, networking, facilities) can be really expensive per core; if you have the floorspace, lots of cheap PCs work out cheaper.

Si

Render farm owner here.

We have about 20 GHz in total processing power (yeah, I’m not ILM). Individual frames could take many hours to process, even at television resolution. It all depends on how complicated the scene is, how many polygons are in use, how many lights are involved, how good your shadows are, whether there’s volumetric lighting, whether there’s radiosity, what level of antialiasing is being used, etc.

All of that stuff jacks up the time. Every light you add increases the render time proportionally. In other words, a scene with 4 lights takes fully 4 times as long to render as a scene with one light.
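If you want a toy model of it, think of each feature as a multiplier on the frame time (the baseline and the multipliers here are made up purely for illustration):

```python
# Toy model: every feature multiplies the per-frame cost rather than adding to it.
# The baseline time and the multipliers are made-up illustrative numbers.

def frame_minutes(base=5, lights=1, radiosity=1.0, antialias=1.0, volumetrics=1.0):
    # Direct lighting cost grows roughly linearly with the light count;
    # radiosity, antialiasing, and volumetric passes each multiply the total.
    return base * lights * radiosity * antialias * volumetrics

print(frame_minutes(lights=1))                                # 5 minutes
print(frame_minutes(lights=4))                                # 20 minutes: 4x the lights, 4x the time
print(frame_minutes(lights=4, radiosity=8, antialias=3))      # 480 minutes once GI and AA pile on
```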

On the house I am designing, when I turn on a decent level of radiosity and antialiasing, it takes me over a day to render a single 720x480 frame on my Intel iMac.

Remember, lots of computation is going on, especially with radiosity, because each polygon has to consider the light bouncing off all the other polys it can “see”.

So with my 720x480 render taking over a day to finish, I can easily see film resolution (which runs around 2000x1000 or more) taking several days.
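Scaling by pixel count alone (render time rarely scales exactly linearly with resolution, but it is a decent first guess):

```python
# Scale the day-long 720x480 render up to film resolution by pixel count.

sd_pixels = 720 * 480            # 345,600
film_pixels = 2000 * 1000        # the "around 2000x1000" figure mentioned above
ratio = film_pixels / sd_pixels  # ~5.8x
print(f"Pixel ratio: {ratio:.1f}x -> one day at SD becomes roughly {ratio:.0f} days at film res")
```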

Also remember all the frames that get rendered only to have the director scrap the lot: even though everything was approved in low-res video and high-res stills, there always seems to be something wrong. The animators also have to do a lot of renders just to see their work. Wireframe and test stills only get you so far. Sooner or later you have to see your scene in a way that closely resembles the finished output.

Hope this makes it clear. If you want to render cheap kid-vid quality CGI, yeah, render times are low. Shrek, on the other hand… Well, this should give you an idea of how far they are going these days. Subsurface scattering is very computationally intensive.

That seems disproportionate. I can render a 1024x576 frame with basic radiosity, three lights, and reflection in a few minutes, often less. If I increase the radiosity settings it does go up a fair amount, but what kind of settings are you using to have to wait a day on only 720x480?

Well, I don’t know what renderer you are using, but I have more lights, including some linear and area lighting, and there are a lot of polys in the scene. I am using 4x4 radiosity and maximum antialiasing. I don’t know what you mean by “basic” radiosity, but I find the lower settings just make the scene look weird instead of more convincing. I am using Lightwave, if it makes a difference to you. The renders come out looking pretty amazing. I am going to post some images soon.

I use Lightwave too, version 9.3. I’m not too clever with lighting and radiosity, though, so when I say basic I mean default settings of about 24 rays and an HDR light probe image. I’m not even sure it’s achieving much; I think the area lights are overwhelming the radiosity somewhat.

Looks like you’re using an earlier version of LW, though, as you’re describing it as 4x4. They’ve improved the rendering times a lot in 9.0+.

Just to clarify, I’m talking about in my own renders. I have no idea what you’re doing; probably a better job than me.

Here’s an example of something I’m working on now. The background is a photograph, but the train is my own model and render.

Besides Moore’s Law, you have to consider improvements in software and algorithms.

I’m curious as to what the QUESTION is. No one else read it as “Blah blah blah blah blah BEE MOVIE blah blah blah SHREK”? :confused:

Question: “How come that, despite ever-increasing processing power, 3D movies need even more ‘computer hours’ than before to render their film?”

Answer: Because of increased complexity of the 3D models and animation.

Wow, nice work. Your boiler looks a little banged up, though. It also looks a little dark. Are you on a Mac? Macs use a different gamma. I would lighten it up a little. I hope to post some of my stuff soon.

Thanks. It is supposed to look banged up, because they bash the hell out of plate metal to build a train, but perhaps I overdid it.

As for gamma, the brightness is fine on my PC and LCD screen, and I checked the Levels in Photoshop and it’s good, so I guess it’s just one of those annoying Colour Management things that I’ve never managed to grasp.

No, the metal is formed so it is smooth. The boiler has a lot of pressure to contain, and any irregularity would create a failure point. (railroad nerd) Check out this or this loco.

It may be the monitor I am using here. It seems kind of contrasty.

Are you basing your loco on any particular prototype? Your model seems like it needs more detail. Get some photographs. There is a ridiculous amount of detail on most locomotives.

I’m not basing it on exact models, but taking details from many. And I have lots of reference photos, and most show a lot less detail than you’d think. Some have lots, some have little. I’m aiming for something in between. I even visited a railway museum and checked out all their locomotives, and I’m getting close now to how they looked. I will be adding some more detail, but not much more. And I still have to build the railway gun that it’s towing.

I think we’ve hijacked this thread into MPSIMS territory.

Nowadays most rendering engines can take shortcuts in things like radiosity, SSS (subsurface scattering), and area lights and shadows. Lots of optimizations, as well as knowing how to tweak your render settings to get the biggest bang for your buck (time-wise). I rendered* this animation, using full-on caustics, HDRI lighting, a few virtual lights with area shadows, and lots of hair (for the feathers on the hummingbird), with my little farm of 3 dual-core Mac minis, at film res. It took about 30 hours, if I remember correctly. So that’s 6 cores at around 1.8 GHz.

Now I’m on an 8-core Mac Pro, and could probably have rendered that scene in a quarter of the time. Faster processors, and more of them.
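Roughly, comparing raw aggregate clock speed (the Mac Pro’s clock is a guess on my part, and this ignores the newer Xeons doing more work per clock, which is why my “quarter of the time” estimate is more optimistic than clock speed alone suggests):

```python
# Aggregate-throughput comparison: 3 dual-core Mac minis vs. an 8-core Mac Pro.
# Clock speeds are assumptions (1.8 GHz minis, ~2.8 GHz Xeons), and this
# ignores per-clock improvements in the newer processors.

mini_farm_ghz = 3 * 2 * 1.8       # 10.8 GHz aggregate
mac_pro_ghz = 8 * 2.8             # 22.4 GHz aggregate
speedup = mac_pro_ghz / mini_farm_ghz
print(f"Raw clock speedup: {speedup:.1f}x -> the 30-hour job drops to about {30 / speedup:.0f} hours")
```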

I try to keep frame render times under 10 minutes if I can help it. But there are so many variables, and so many compromises one can make (or live with), that dramatically affect render times.

*Modeled, animated, and rendered using Cinema 4D’s Advanced Render engine. I also use V-Ray on some projects.

It’s a law of computer graphics: any state-of-the-art render shall take one full day.

Back when Phong shading was state of the art, it took a full day to render one frame.

Whenever processor speeds catch up, some new technique will be invented to consume that power.