If money were no object, how good could real-time gaming graphics be?

Looking just at current and imminent hardware and software, how good could real-time graphics be at a framerate of at least 24 frames per second?

I know that photorealistic graphics are possible if they are prerendered, but I am specifically asking about real-time graphics.
From what I understand, the real problem in having photorealistic graphics is lighting, right? You pretty much have to run a physics simulation of the light as it bounces off surfaces, takes on different colors, and is absorbed by materials. This is so computationally expensive that no consumer-level computer can do it in real time. Am I right so far?
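(For what it’s worth, the textbook statement of that simulation is the rendering equation: the light leaving a point is the light it emits plus all the incoming light, weighted by how the material reflects it. The symbols below are the standard ones, not anything from this thread: $L_o$ is outgoing light, $L_e$ emitted light, $f_r$ the material’s reflectance, $n$ the surface normal.

$$L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, d\omega_i$$

Real-time engines approximate this integral with shortcuts; offline renderers estimate it more faithfully, which is where the cost comes from.)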

Are there supercomputers which can do it in real time? How far away are we from that?

Part of the question is “photo-realistic graphics of what?”

We can render a car, for instance, more easily than a photo-realistic human or a photo-realistic bush (unless you cheat heavily with the foliage). Rendering two cars driving around a dirt-and-concrete racetrack in real time is thus a lot different from rendering two people having a foot race through the jungle.

True. Take “photorealistic” to mean whatever you think is relevant, as long as you specify what you mean by it. It’s true that we’ll get to photorealistic car races before we get to photorealistic, character-interaction-heavy games.

They could be as good as Pixar movies. Generally, movies budget about 10-100 hours of rendering per frame, but rendering loads are highly parallelizable, so if you’re prepared to invest in 10 million times as many computers as a Pixar rendering farm, you could render in real time.
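A quick back-of-envelope check on that multiplier, taking the high end of the quoted budget:

```python
# How much more compute does "real time" need than a film render farm?
hours_per_frame = 100                     # upper end of the quoted budget
render_seconds = hours_per_frame * 3600   # 360,000 s of compute per frame
realtime_budget = 1 / 24                  # ~0.042 s per frame at 24 fps
print(render_seconds / realtime_budget)   # ~8.6 million x, i.e. ~10 million
```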

The limit with animated movies is not computational power so much as a) the algorithms that still need to be developed to accurately simulate effects such as fur, water, snow, and hair, and b) the man-hours required to craft all the artwork.

I think we’re pretty much there when it comes to cars, on PC at least:

Project CARS on PC:

And here is Skyrim modded on PC - keep in mind that this game uses an outdated DX9 engine; I think a proper modern game engine would do a lot better:

I don’t think that’s really correct. Pixar movies don’t necessarily attempt to be photo-realistic. And you can already play racing games with cars that look far more photo-realistic (during actual gameplay) than the ones in Cars. I’d guess that you could design a racing game now whose footage would fool most people into believing it was actual “real life” footage, provided you had a track/background that was easy enough to render to go along with it. Asphalt and high walls, sure. Trees and people, probably going to ruin the illusion.

Edit: Kinthalis beat me to posting.

I do think we’re getting there. Here is Arkham Knight on PC:

http://media.pcgamer.com/files/2014/04/Batman-Arkham-Knight.jpg

There was a university research project in the ’80s and ’90s that did just that. It was called Pixel Planes, and it basically consisted of assigning a separate graphics processor to each pixel in the image.
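The idea in sketch form, very roughly: every pixel evaluates the same linear edge expressions for a primitive at once and keeps the result if it’s inside. A plain Python loop stands in here for what were physically parallel per-pixel processors, and the triangle is made up for illustration:

```python
# One-processor-per-pixel, in miniature: each pixel independently
# evaluates the same edge expressions a*x + b*y + c for a triangle.
WIDTH, HEIGHT = 8, 8
edges = [(1, 0, -1), (0, 1, -1), (-1, -1, 10)]  # made-up triangle edges

framebuffer = {}
for y in range(HEIGHT):          # conceptually, all pixels run at once
    for x in range(WIDTH):
        inside = all(a * x + b * y + c >= 0 for a, b, c in edges)
        framebuffer[(x, y)] = 1 if inside else 0

print(sum(framebuffer.values()), "pixels covered")
```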

That is kinda impressive: Pixel Planes 4 (1987) - YouTube (fast forward to 3:20)

When we look at a rendered face like this: http://www.geeky-gadgets.com/wp-content/uploads/2013/03/Face-Rendering-Software.jpg or this: http://www.geforce.cn/sites/default/files-world/screenshots/screenshot-1_20.jpg

What is it that’s wrong? Even without being told, I’m pretty sure I would have been able to tell it was rendered, but I don’t know what I’m seeing that clues me in.

Am I right that photorealism mainly comes down to calculating being color, bounced and absorbed?

Right, light interacts with objects and itself in complex ways.

As for faces… well, we’ve literally evolved incredibly complex systems to read all sorts of things in people’s faces. You need to get not only the lighting right (things like global illumination, subsurface scattering, etc.), but the animation of the facial muscles and the movement of the eyes have to be dead-on, or we can tell something is off.
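To give a flavor of why getting the lighting right is so expensive, here’s a toy Monte Carlo sketch: one gray floor under a uniform sky, with the scene, camera ray, and sampling all invented just for illustration - nothing like production renderer code:

```python
import math
import random

def trace(origin, direction, depth=0):
    """Follow one light path: at each bounce, attenuate by the surface
    reflectance (albedo) and continue in a random direction."""
    if depth > 4:                      # stop runaway recursion
        return 0.0
    if direction[1] >= 0:              # ray escapes upward...
        return 1.0                     # ...and sees the uniform white sky
    t = -origin[1] / direction[1]      # intersect the floor plane y = 0
    hit = (origin[0] + t * direction[0], 0.0, origin[2] + t * direction[2])
    albedo = 0.5                       # floor absorbs half of what hits it
    # Pick a random upward bounce direction (proper cosine-weighted
    # sampling omitted to keep the sketch short).
    theta = random.uniform(0.0, math.pi / 2)
    phi = random.uniform(0.0, 2 * math.pi)
    bounce = (math.sin(theta) * math.cos(phi),
              math.cos(theta),
              math.sin(theta) * math.sin(phi))
    return albedo * trace(hit, bounce, depth + 1)

# One pixel = the average over many random paths; in richer scenes the
# estimate is noisy and needs thousands of samples per pixel, which is
# where the computational cost explodes.
paths = [trace((0.0, 1.0, 0.0), (0.3, -1.0, 0.2)) for _ in range(1000)]
print(sum(paths) / len(paths))         # ~0.5: half the sky light survives
```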

Damn, I word not good.

As Kinthalis surmised, I meant “calculating light being colored, bounced and absorbed?”

The problem is not how to calculate something; it’s that often we’re not even really sure what to calculate in the first place. For example, for a long time we had a hard time rendering realistic-looking skin because the common approach was to treat skin as a surface and apply effects to that surface. It wasn’t until we realized that skin is slightly translucent, and that the fat, muscle and veins beneath it also contribute to its color, that we started getting better skin rendering.
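To illustrate the kind of trick that realization enabled, here’s a crude sketch of “wrap lighting”, one common cheap stand-in for subsurface scattering; the wrap amount and channel weights below are invented for illustration, not taken from any particular engine:

```python
# Crude "wrap lighting" sketch: instead of cutting light off hard at
# the shadow edge (max(0, N.L)), let it wrap past the terminator and
# tint the wrapped-around light red, the way light leaking through
# flesh is. Wrap amount and color weights are made up.

def skin_diffuse(n_dot_l, wrap=0.5):
    lit = max(0.0, (n_dot_l + wrap) / (1.0 + wrap))  # softened falloff
    hard = max(0.0, n_dot_l)       # plain opaque-surface shading
    scatter = lit - hard           # light that "leaked" under the skin
    return (hard + scatter,        # red scatters the most
            hard + 0.4 * scatter,  # green less
            hard + 0.2 * scatter)  # blue least

# At the shadow edge (N.L = 0), an opaque surface would be black;
# skin instead shows a soft red glow.
print(skin_diffuse(0.0))  # -> (0.333..., 0.133..., 0.066...)
```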

Similarly for fur, hair, water, snow, etc.: we’re still figuring out just how they work and how best to simulate and render them.

For example, here is a recent SIGGRAPH video on the work required to simulate snow for Disney’s Frozen. If you look carefully, the simulated snow still doesn’t behave exactly the way real snow does, but it’s getting closer (and this is only for clean powder snow; slushy snow, dirty snow, refrozen snow, etc. are still open problems).

It’s this, rather than a lack of computational power, that’s the main limit right now.

It’s called the Uncanny Valley.

Those are very well done!

I had some friends in grad school who worked in graphics, and they asked me to look at their stuff sometimes. If I had to pick out what’s wrong with the picture you linked, I’d say the eyes are too glassy and the ears don’t have the right amount of translucency (also, the ears poke out in a weird way). But most of the rest of the skin and the mouth are very good.

I think a lot of the trouble in rendering generally comes when there’s more than one layer to a surface. As Shalmanese points out, skin got a lot better once we (we being the global graphics community, not me and some dudes; I am terrible at graphics) started modeling the subcutaneous layers as translucent and rendering them. Cars don’t have layers to the paint to worry about (or not as many), so they’re much easier to get photorealistic.

Cars also consist of hard geometric surfaces with precise measurements. People and animals start getting very complex, especially when they move and you start trying to render the interactions of clothes, hair and fur.

As Kinthalis and MichaelEmouse indicated/surmised, light effects like subsurface scattering, global illumination, radiosity, diffusion, and volumetric effects (fog, lighting, etc.) are all computationally intensive. It’s the reason why ’80s- and ’90s-style CGI looks flat and Avatar doesn’t.

IIRC, most games that have photorealistic(-ish) graphics cheat a bit by rendering these effects during production and then using maps to apply them in the game. IOW, Call of Duty and Grand Theft Auto don’t calculate the light sources in real time.
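Conceptually, the cheat looks something like this; the function names and numbers are made up, and real engines store the baked results in lightmap textures rather than a dictionary:

```python
# Sketch of the "bake it offline, look it up at runtime" cheat.
lightmap = {}  # (surface_id, u, v) -> precomputed incoming light

def bake(surface_id, u, v):
    """Offline step, can take hours: solve global illumination once
    for static geometry and store the result per surface patch."""
    expensive_gi_result = 0.8          # stand-in for the real solve
    lightmap[(surface_id, u, v)] = expensive_gi_result

def shade(surface_id, u, v, albedo):
    """Runtime step, runs per frame: lighting is just a cheap lookup,
    not a fresh light-transport simulation."""
    return albedo * lightmap[(surface_id, u, v)]

bake("wall_03", 0.25, 0.75)               # done once, during production
print(shade("wall_03", 0.25, 0.75, 0.6))  # done every frame -> 0.48
```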

Looking at something similar for androids (from 1:20 onward: Female Android Geminoid F - YouTube): I can easily enough tell the difference between the two there. The hair is good, but the eyes and skin are too perfect. Surfaces have some wear and tear, some imperfections relative to their Platonic ideal, which, when absent, look wrong. The skin is too uniform at both the micro and macro scale, and in how it reflects light.

The young woman is more believable, likely because I don’t expect her skin to be worn in. The man’s skin is wrinkly, but at a smaller scale than the wrinkles it looks too fresh. I’ve heard about megatextures largely solving this problem; perhaps they’ll do the same for human skin. I expect we’ll see photorealistic dark skin before we see photorealistic light skin.

Perhaps if I had spent 3 decades living among apes, it would jump out at me, but if I hadn’t previously known the ape was fake, I likely wouldn’t have noticed anything.

Kinthalis,

That photorealistic foliage for Skyrim, how well does it run? Is it a lot more computationally intensive?