The previous posters have covered a lot of the ground here, so I’m just gonna leap in with some really wild guesses about how fast a computer would have to be to do this stuff in real time.
I’m thinking back to a render I did in Lightwave 5.5 about two years ago that turned out to be one of the longer renders I’d done to that point. It was a moody Venetian scene done for a Doctor Who CD cover. A reasonable amount of geometry, six lights, various texture and bump maps, some procedural texturing and some time-consuming volumetric lighting. I’m using this as an example since I think it falls in the right ballpark for complexity.
Out of curiosity I had noted down the duration of the render when it finished: 12h 37m 18s. That’s 45,438 seconds.
My render machine at the time was a PIII 700 MHz. I haven’t been able to Google any firm figures for the speed of this configuration, but I’m going to hazard a guess at around 0.5 Gigaflops.
This means that, assuming the render threads got all of the processor time, the computer executed about 2.3x10^13 floating point operations over that period.
I’m going to call real-time rendering a minimum of 24 frames per second. This means that any computer powerful enough to render my example scene this fast must be able to sustain about 5.5x10^14 floating point operations per second.
This is equal to roughly 500,000 Gigaflops. How’s that for a nice round number after pulling all those figures out of the air?
Currently the fastest computer in the world is the NEC Earth Simulator (Top 500 list, June 2002), capable of 35,860 Gflops. I make that still a factor of fifteen or so short of doing this stuff in real time.
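If anyone wants to poke at my arithmetic, here’s a quick Python sketch of the same back-of-the-envelope sums. The render time is real; the 0.5 Gflops figure for the PIII and the 24 fps threshold are just my guesses from above.

    # Back-of-the-envelope check of the figures above.
    # The render time is the one I noted down; the CPU speed is a guess.

    render_seconds = 12 * 3600 + 37 * 60 + 18       # 12h 37m 18s -> 45,438 s
    cpu_gflops = 0.5                                 # guessed sustained speed of a PIII 700

    ops_per_frame = render_seconds * cpu_gflops * 1e9   # ~2.3e13 floating point ops
    fps = 24                                             # my minimum for "real time"
    required_flops = ops_per_frame * fps                 # ~5.5e14 flops per second

    earth_simulator_flops = 35_860 * 1e9                 # Top 500 list, June 2002

    print(f"Render time: {render_seconds:,} s")
    print(f"Ops for one frame: {ops_per_frame:.2e}")
    print(f"Needed for {fps} fps: {required_flops:.2e} flops/s "
          f"(~{required_flops / 1e9:,.0f} Gflops)")
    print(f"Shortfall vs Earth Simulator: {required_flops / earth_simulator_flops:.0f}x")

Running it gives roughly 545,000 Gflops needed, or about fifteen times what the Earth Simulator can manage.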
This assumes that you want to do a mathematically rigorous rendering of a scene. There are likely many, many hacks, tricks and cheats that could be employed, as they are in current game engines.
For now, though, look forward to Doom III or take a look at the 3DMark software from MadOnion, which includes some quite impressive demos if you’ve the hardware to run them.