State of the Art in 3D rendering: Where can I get stats?

We can render 3D scenes that can fool the eye into believing they are real. We use this technology in movies like Titanic. But video games are not yet at the point where they are visually indistinguishable from reality.

I would like to find out what software is used to render the kinds of fully realistic 3D scenes seen in Titanic.

  1. How fast can we render such a scene? I know it depends upon complexity, but I would like to be able to estimate how powerful a computer would need to be to render such a scene in real time.

Whoa.

Don’t some of the special effects people develop their own in-house software? Isn’t that what RenderMan was?

My helpfulness, alas, ends here (if it even began).

Most of my friends in the business use LightWave 3D. You can find out more at their web site.

A machine that could render such images in real time is, sadly, still in the realm of science fiction.

The software that is used for high-end CGI varies from company to company and project to project. In most cases, there are many pieces of software used on each project. Also, there is a lot of software custom developed for film projects. Just a few examples of off-the-shelf software are: Maya, Softimage XSI, 3ds max, and Lightwave. There are many more out there as well. Unfortunately, they tend to be pretty expensive. 3ds max in particular is also used very extensively in game development.

Often a film project presents enough unique challenges that custom software is developed to suit. Sometimes this is as simple as a shading algorithm to better simulate a real material; sometimes it is a complex physics simulation tool; sometimes it is even a complete rendering system. For example, WETA developed a system called ‘Massive’ to help generate the epic-scale battles in the Lord of the Rings movies. Massive takes care of the positioning and movement of large numbers of 3D models, so the animators don’t have to animate every individual elf, orc, human, and hobbit. IIRC, a proprietary system was developed for Titanic to create realistic ocean renderings that include the ship’s wake and waves.

In terms of render times, there are so many variables that it’s almost impossible to estimate how powerful a computer you’d need. Render times are very scene dependent: a reflective surface will cause a large increase in rendering time, as will realistic light-scattering effects. In a lot of cases we ‘cheat’ and find a way to make a scene look real (if that’s the goal!), regardless of the physical accuracy of the way the light is being treated in the computer. But as computers get faster, we continually increase our demands as well.

Rendering realistic, scientifically accurate scenes without cheating is still not attainable at any kind of reasonable speed. Keep in mind that to create the illusion of motion, you need a number of frames per second (film runs at 24 fps), so for real-time playback you need to render each frame in a small fraction of a second (about 42 ms at 24 fps). A high-quality architectural visualization may be indistinguishable from a photo, but it’s entirely possible for it to take dozens of hours for one still image! So if you’re talking about photo-realistic rendering in real time, we still need a huge increase in computing power.

As you say in your OP, games aren’t in the realm of photoreal 3D yet. However, they’re the only place we can get real-time graphics.

Doom3 is almost ready for release, and it has taken enormous leaps in realistic graphics: http://www.gamespy.com/e32002/pc/doom3b/

Check out this link, and the Stats panel at the right. Those screenshots are realtime in-game images! NOT pre-rendered cut-scenes.

It’s causing quite a buzz, and could be the beginning of incredible imagery to be released in video gaming, and therefore TV and movie effects.

As for your main question, go to this website for the “State of the Art”: http://www.uemedia.com/CPC/vfxpro/

The previous posters have covered a lot of the ground here, so I’m just gonna leap in with some really wild guesses about how fast a computer would have to be to do this stuff in real time.

I’m thinking back to a render I did in Lightwave 5.5 about two years ago that turned out to be one of the longer renders I’d done to that point. It was a moody Venetian scene done for a Doctor Who CD cover. A reasonable amount of geometry, six lights, various texture and bump maps, some procedural texturing and some time-consuming volumetric light. I’m using this as an example since I think it falls in the right ballpark for complexity.

Out of curiosity I had noted down the duration of the process when it finished: 12h 37m 18s. That’s 45,438 seconds.

My render machine at the time was a PIII 700 MHz. I haven’t been able to Google any firm figures for the speed of this configuration, but I’m going to hazard a guess at around 0.5 Gigaflops.

This means that, assuming the render threads got all of the processor time, the computer executed about 2.3x10^13 floating point operations in this time.

I’m going to call real-time rendering a minimum of 24 frames per second. This means that any computer powerful enough to render my example scene this fast must be able to sustain about 5.5x10^14 floating point operations per second.

This is equal to roughly 550,000 Gigaflops. How’s that for a nice round number after pulling all those figures out of the air?
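
For what it’s worth, the whole back-of-the-envelope calculation fits in a few lines of Python. Every input is one of my guesses from above, not a measurement:

```python
# Back-of-the-envelope estimate: how much compute would it take to render
# the Venetian test scene above at 24 fps? All inputs are guesses.

render_seconds = 12 * 3600 + 37 * 60 + 18     # 12h 37m 18s -> 45,438 s
cpu_flops = 0.5e9                             # guessed sustained speed of a PIII 700 MHz

flops_per_frame = render_seconds * cpu_flops  # ~2.3e13 operations for one frame
fps = 24                                      # minimum frame rate for the illusion of motion
required = flops_per_frame * fps              # ~5.5e14 operations per second

print(f"~{required / 1e9:,.0f} gigaflops")    # prints "~545,256 gigaflops"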

Currently the fastest computer in the world is the NEC Earth Simulator (Top 500 list, June 2002), capable of 35,860 Gflops. I make that still a long way short of doing this stuff in realtime.

This assumes that you want to do a mathematically rigorous rendering of a scene. There are likely many, many hacks, tricks and cheats that can be employed, as they are now in the current game engines.

For now though, look forward to Doom III or take a look at the 3DMark software from MadOnion since it includes some quite impressive demos if you’ve the hardware to run them.

      • Stolen from another post on a 3D-graphics forum: somebody asked why nobody produces anything amateur that looks like the stuff in the Final Fantasy movie, and got this explanation…
        "The movie cost $137 million, and four years, to create. Square Pictures built a $50 million studio in Honolulu, with 240 employees. They spent 18 months just developing plug-ins for Maya (photo-realistic skin, hair, and cloth.)

Hardware: (this is from a Silicon Graphics press release) “Four SGI 2000 series high-performance servers, four Silicon Graphics® Onyx2® visualization systems, 167 Silicon Graphics® Octane® visual workstations and other SGI systems were used to create the film. Alias|Wavefront™ Maya® software was used for animation authoring on the SGI machines, and Pixar RenderMan® software was run on Linux® OS-based systems.”

(This from an interview with Troy Brooks, Production Systems Supervisor on the film):
“All the artists have SGI Octanes on their desks (some have two!).”
“The Onyxes are used for compositing, and as a platform for our preview system, which lets the artists review full-res playback of long sequences of the movie, spooled on a (very fast) RAID array. The 16-cpu [SIXTEEN CPU!–my edit] Origin 2000s are primarily used for batch-processing MTOR jobs, which is the Maya-To-Renderman conversion.”
“We have a number of NetApp file servers, that provide most of the disk storage (approx. 4TB of primary disk space)[THAT’S FOUR TERABYTES, FOLKS!–my edit]. The renderfarm consists primarily of ~1000 Linux machines (PIII, custom-built, rack mounted), running Red Hat 6.2.”

So, as you can see, apples-and-oranges comparisons really don’t apply. Not that I didn’t enjoy the film, and it is a tour de force of 3D animation; I just don’t think any of us have these kinds of resources lying around. 240 people collaborated on this (although at least that many would work on any major Hollywood release).

It stands as an amazing example of what can be done when you throw $137 million at a project."

I couldn’t find any official prices, but a Silicon Graphics Octane is a desktop workstation that starts around $5,000 and goes up fast with options.

What you guys are forgetting is that CPUs are horribly bad at jobs like rendering scenes. While Armilla states that his PIII 700 took over twelve hours to render a scene, it also plays Quake 2 at about the speed of a Voodoo 1 (probably quite a bit slower, actually). With dedicated graphics chips, speed can be increased massively with dedicated circuitry. The GeForce 4 can already do Toy Story 1-esque rendering at 30 fps. It is said that Toy Story 2 is around six times more complex, but the ATI Radeon 9700 is already 2-3 times as fast as the GeForce 4, and the NV30 is meant to be significantly faster than either. While it’s still a big jump from TS2 to Final Fantasy, it’s not a HUGE one; certainly nothing that can’t be handled by Moore’s law within 2-3 years.

The problem with graphics cards, though, is that they are TOO specialised and will do things like approximate the accuracy of a scene to save a few clock cycles. That is unacceptable in movie rendering.

It all depends on what your benchmark for “photorealistic” is. I have no doubt that, as Salmanese says, we’ll get real-time rendering of Titanic quality (at monitor resolution) within the next decade or two.

On the other hand, the CGI in Titanic isn’t truly photorealistic: all sorts of cheats and shortcuts were used. (E.g., in the flyover of the ship, the little figures on deck don’t even have faces.) Not that it matters. The art of both movie special effects and game graphics lies in using the right combination of cheats and shortcuts so that the consumer neither knows nor cares how the product was made: he just likes it for what it is.

      • There are actually a number of ways to render photorealistic scenes; none are fast and easy. To produce a photo-realistic image, you have to cast lighting, bounce lighting, and cast a view; how you deal with bouncing the lighting is the key to it all. There are different ways of doing this, and it’s arguable which is the most realistic. You can use the standard scientific formulas for light distribution, but the end result looks too clean and sterile, like a photo taken in space or on the moon. So not even the top-end software really forces you to do it “right”; they give you things you can do to “soften” the lighting (if you wish). And note that they soften the process of light rendering; it’s not as simple as applying a “blur” filter effect to the finished image. Different depths of light have to be blurred differently to look right. (There’s a toy sketch of the direct-vs-bounced distinction after this list.)
  • I have kind of been casually planning a drafting/3-D modeling app in Java, so I’ve looked into the theory quite a bit. I’m almost to the point where I know everything I need to about Java, so I’ll have no excuses left not to start…
    ~
    By the by, talkgraphics.com has a forum for 3-D modeling; there’s a thread there now about free 3-D modeling software, if’n anybody cares.
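
To make the bounced-lighting point above concrete, here’s a toy Python sketch of direct versus one-bounce diffuse shading. All the scene values are hypothetical, and a real renderer would integrate the indirect term over the whole hemisphere (radiosity, path tracing) rather than taking a single sample:

```python
def lambert(normal, to_light):
    """Diffuse (cosine) falloff; both vectors must be unit length."""
    return max(0.0, sum(n * l for n, l in zip(normal, to_light)))

normal = (0.0, 1.0, 0.0)        # surface faces straight up
to_light = (0.0, 0.8, 0.6)      # unit vector toward a point light
light_intensity = 1.0
albedo = 0.7                    # fraction of incoming light the surface reflects

# Direct term only: the clean, sterile, photo-on-the-moon look.
direct = albedo * light_intensity * lambert(normal, to_light)

# Crude one-bounce term: light re-radiated from a nearby lit wall
# (hypothetical radiance and direction), softening the shadows.
wall_radiance = 0.3
to_wall = (0.6, 0.8, 0.0)
indirect = albedo * wall_radiance * lambert(normal, to_wall)

print(f"direct only: {direct:.3f}, with one bounce: {direct + indirect:.3f}")
```

Even that single extra sample brightens this surface point by about 30%, which is why skipping, faking, or softening the bounces changes the look of a scene so dramatically.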

Ooh! A new forum to visit! Maybe they have the answer to my Lightwave lights problem…