Computer graphics vs Reality

Take a modern computer game, like Far Cry 2, F.E.A.R. 2 or Crysis. The visual quality is insane, yet the images still seem somewhat unnatural. There is definitely something different from, say, footage from a video camera.

I am trying to figure out what the difference is, but I can’t. Is it that the colors in computer graphics are more saturated? Something about the source of the lighting? What would be needed to make a computer game feel like real video camera footage?

Well, models are generally fairly smooth. Real people are rough - they have pores, they’re asymmetrical, etc.

Take a look at these 3D models - the 1st and 3rd are pretty striking, especially the third image down, which was deliberately designed to look “imperfect” the way a real human does. At first I honestly wasn’t sure if that was a photo or a model.
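To put that in code terms, one cheap way to fight the over-smooth look is to jitter the surface a little. Here’s a minimal Python sketch of the idea (the toy mesh, the numbers and the function name are all made up for illustration): displace each vertex a tiny random amount along its normal so the surface stops being mathematically perfect.

```python
import numpy as np

def roughen(vertices, normals, scale=0.002, seed=0):
    """Displace each vertex a tiny random amount along its normal.

    A crude way to break up the 'too smooth' look of a CG surface:
    real skin has pores, bumps and asymmetry that perfectly smooth
    geometry lacks.  `scale` is in the same units as the mesh.
    """
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, scale, size=(len(vertices), 1))
    return vertices + noise * normals

# Toy example: a flat 2x2 patch, all normals pointing up (+Z).
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], dtype=float)
norms = np.array([[0, 0, 1]] * 4, dtype=float)
print(roughen(verts, norms))
```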

Lighting is one of the big challenges. In real life, light will illuminate something directly, but it will then reflect, scatter, or diffract and illuminate other objects as well. Objects cast shadows from both direct light sources and diffuse, indirect ones.

Crysis has a very sophisticated lighting engine, but it’s still not quite right. Ideally, one could ray-trace the light as it bounces between objects, and this is done for a lot of CGI. It’s too hard to do in real time, so there are a lot of shortcuts and approximations that only verge on the natural behavior of light.
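To make the “light bounces onto other things” point concrete, here’s a toy Python sketch (not CryEngine’s code or anyone’s real renderer; the numbers are illustrative): a point facing away from the light gets nothing from direct illumination alone, but it still receives some energy after the light bounces off a nearby diffuse wall. That bounced contribution is the part real-time engines mostly have to fake.

```python
import math

def direct_light(intensity, distance, cos_theta):
    """Irradiance from a small light source: inverse-square falloff
    times the cosine of the angle of incidence (Lambert's law)."""
    return intensity * max(cos_theta, 0.0) / distance**2

def one_bounce(intensity, d_light_to_wall, d_wall_to_point,
               albedo=0.5, cos1=1.0, cos2=1.0):
    """Light that first hits a diffuse wall, then scatters on to a point
    the light itself cannot see.  A direct-lighting-only renderer would
    leave that point completely dark."""
    at_wall = direct_light(intensity, d_light_to_wall, cos1)
    return albedo * at_wall * max(cos2, 0.0) / (math.pi * d_wall_to_point**2)

# A point in shadow gets nothing directly, but some energy via the wall:
print(direct_light(100.0, 2.0, -1.0))   # 0.0 -- facing away from the light
print(one_bounce(100.0, 2.0, 1.0))      # small but non-zero
```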

In addition to the aforementioned lighting issues: motion blur.

A lot of what we perceive as smooth motion in movies is slightly blurred; even our own vision can only process things at a certain speed. CGI tends to look “too clean” to our eyes because that blurring is not there.
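A crude way to see what that blur actually is: render the same frame several times at slightly different instants and average them, which is roughly what a camera shutter does by integrating light over its exposure time. A minimal Python sketch with a made-up one-dimensional “scene”:

```python
import numpy as np

WIDTH, SUBFRAMES, SPEED = 16, 8, 4   # SPEED = pixels the dot moves per frame

def render(t):
    frame = np.zeros(WIDTH)
    frame[int(SPEED * t) % WIDTH] = 1.0   # a single bright dot
    return frame

def frame_without_blur(t):
    return render(t)                      # one instant: razor sharp

def frame_with_blur(t):
    # Sample several instants during the frame's exposure and average.
    samples = [render(t + i / SUBFRAMES) for i in range(SUBFRAMES)]
    return np.mean(samples, axis=0)

print(frame_without_blur(1.0))   # one hard, perfectly sharp pixel
print(frame_with_blur(1.0))      # the energy is smeared along the motion path
```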

Every single element in a video game is a cheat. It approximates reality by finding a shortcut that can be reproduced in real time. For that reason, it is “almost but not quite real” until the next breakthrough, when it gets even closer to reality.

If you were to look at the amazingly realistic graphics of 1999 and compare them with the amazingly realistic graphics of 2009, you would wonder what they were thinking ten years ago when they thought those were any good - but great strides had been made, comparatively.

Slowly they’re getting closer to understanding realism, and to finding ways to re-create it in real time. In another ten years, who knows what they will have achieved.

It’s the details. The sheer, staggering amount of detail that your brain picks up from an image and that is absent from CGI (and while pre-rendered movie CGI could possibly bridge that gap some day, the day when on-the-fly 3D used in gameplay does is eons further off).

Look at your hand. No, man, don’t take the LSD yet! :slight_smile: Just look at your still hand, held at arm’s length. The uneven shape and texture, the million little ridges, bumps, blemishes and spots, the distortion of the visible veins, the bone structure under it that deforms the skin and muscles as it moves, the miscellaneous hair. Even the color of the skin is composed of myriad shades.

And that’s just for your hand. Compared to a complete body, a piece of scenery or a whole room? And without even touching the issue of animation? Fuggedaboudit.

So that feeling of “unreal” you get is you unconsciously noticing that there is far less detail than you are used to expecting - plus, as previously mentioned, the light not acting quite right - and your brain flagging it all as suspect.

I will second this. If you have seen the demos of the computer game “Little Big Planet” that showed it with and without motion-blur and depth-of-field, the difference is striking. The blurring made it seem far more real.

Real life does not have smooth lines and hard edges like computer graphics. It has lots of gradients, textures, imperfections, fuzziness, and out-of-focus elements.

Depth-of-field is another issue. Our eyes can only focus on a fairly narrow focal plane; everything else looks blurry. A computer screen or TV will have absolutely everything in focus, even though parts of the scene are supposed to be far away and other parts much closer. That’s not too bad, but it kind of makes everything… flat, despite otherwise eye-popping graphics. Some games include a depth-of-field effect, but IMO it doesn’t work very well. The game designers have to decide on an arbitrary focal plane that will be in focus, which means that if you look at anything else it’s blurry. Try to peer off into the distance? Too bad, it’s too far away. Trying to look at something right next to you? Also out of focus.
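A depth-of-field pass is usually driven by something like the thin-lens circle-of-confusion formula below: blur grows with distance from whatever focal plane was picked, so anything you happen to look at off that plane stays blurry. A minimal Python sketch with made-up camera constants:

```python
# Approximate blur radius for an object at `depth` metres when the camera
# is focused at `focus_depth` metres (thin-lens model).  The aperture and
# focal length values here are arbitrary, for illustration only.

def circle_of_confusion(depth, focus_depth, aperture=0.05, focal_len=0.035):
    return abs(aperture * focal_len * (depth - focus_depth) /
               (depth * (focus_depth - focal_len)))

for d in (0.5, 2.0, 5.0, 50.0):
    print(f"object at {d:>5} m -> blur radius {circle_of_confusion(d, 5.0):.5f}")
# Only objects near the chosen 5 m focal plane come out sharp; everything
# the player might actually be looking at elsewhere gets blurred.
```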

In film, the director decides what you’re looking at, and there’s a standard sort of cinematography that makes this easier for the viewer.

Yet another issue is that even the best displays don’t have a very big contrast ratio, or very many degrees of brightness between their darkest and lightest colors. The brightest white is typically a few hundred times brighter than the darkest dark, and even the best current display technologies bring that to maybe a few thousand times. And between the brightest and darkest of each color, there are only 256 steps.

In contrast, the real world provides us with light intensities covering many more orders of magnitude. Starlight is about 10^-4 lux (near the bottom of what humans can see), a full moon is 0.27 lux, a well-lit interior is a few hundred lux, and direct sunlight is over 100,000 lux. We can’t perceive all of that at once, of course - our eyes adjust to the conditions - but we can still perceive a lot more contrast than displays will give us, and with much finer gradation (though 256 steps seem pretty smooth over the range that monitors can display).
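Just to put numbers on that, here’s a quick back-of-the-envelope calculation using the figures above (and treating a “good” display as having roughly a 1000:1 contrast ratio, which is an assumption on my part):

```python
import math

starlight = 1e-4          # lux
sunlight = 1e5            # lux
display_contrast = 1000   # a generous contrast ratio for a good display

real_world_range = math.log10(sunlight / starlight)   # ~9 orders of magnitude
display_range = math.log10(display_contrast)          # ~3 orders of magnitude

print(f"real world: ~{real_world_range:.0f} orders of magnitude")
print(f"display:    ~{display_range:.0f} orders of magnitude")
print(f"8-bit steps per channel: {2**8}")
```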

In a game, the darkest room looks like a matte black piece of electronics in your living room (oh hey, I need to dust…), really just a darkish grey. And the brightest oversaturated in-game sunlight is an even white, probably less bright than the light fixtures nearby. A proper movie theater can improve on this, but only by so much.

In terms of color depth, that means that the best displays can’t provide as much visual information as we’re capable of seeing.

Some games simulate the human response to changes in brightness with HDR rendering. Here, the engine gives the scene as much contrast as it can - saturating bright objects in a mostly dark scene, for example. When the game moves from dark to light, it simulates the eyes adjusting as well. It looks pretty good, but it’s still an attempt to fool us into perceiving more than is really there.
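For the curious, here’s a minimal sketch of that idea - a global tone-mapping operator plus a slowly adapting exposure - written in Python with made-up numbers; it is not any particular game’s implementation. Watch the exposure climb while we sit in the dark room, then the sunlit scene blow out to near-white while the exposure starts winding back down, like eyes adapting.

```python
import numpy as np

def reinhard(hdr, exposure):
    scaled = hdr * exposure
    return scaled / (1.0 + scaled)               # classic Reinhard operator

def adapt(exposure, hdr, target_grey=0.18, speed=0.5):
    avg = np.exp(np.mean(np.log(hdr + 1e-6)))    # log-average scene luminance
    ideal = target_grey / avg                    # exposure that would hit mid-grey
    return exposure + speed * (ideal - exposure) # ease toward it, frame by frame

exposure = 1.0
dark, bright = np.full(4, 0.01), np.full(4, 10.0)
for scene in [dark] * 4 + [bright] * 4:
    exposure = adapt(exposure, scene)
    print(f"exposure {exposure:7.2f} -> displayed {reinhard(scene, exposure)[0]:.3f}")
```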

I have noticed, however, that the farther we get from the human model, the better we get at approximating reality. Take the newest Gran Turismo game, with its spectacular graphics engine on the PS3. The screenshots for that game were so amazing, I wasn’t sure whether I was looking at a real car or not. Granted, the photos may have been touched up.

As for the “human problem”, another issue might be the movement of a human. No human can stand perfectly still, and computer models tend to either move too little (almost any game prior to ~2005, perhaps) or too much.

What’s worse is that games that approximate the amount of autonomous movement in humans still look as though they’re moving too much, only because we’ve become accustomed to non-moving models.
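One common trick for that autonomous movement is purely procedural: layer a couple of slow, slightly detuned sine waves onto the idle pose and tune the amplitude until it reads as “alive” rather than “fidgety”. A minimal Python sketch, with numbers made up purely for illustration:

```python
import math

def idle_offset(t, amplitude=0.01):
    """Lateral and vertical sway (in metres) at time t (seconds), built
    from slow, slightly detuned sine waves so the motion never repeats
    exactly.  Too small an amplitude looks like a statue; too large
    reads as restless fidgeting."""
    sway_x = amplitude * (math.sin(0.9 * t) + 0.5 * math.sin(2.3 * t + 1.0))
    sway_y = amplitude * 0.5 * math.sin(1.1 * t + 0.3)
    return sway_x, sway_y

for t in range(5):
    x, y = idle_offset(float(t))
    print(f"t={t}s  offset=({x:+.4f}, {y:+.4f})")
```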

Which suggests we should replace the biggest variable: people. A video game or movie that does not need to seem “realistic” to such fallible creatures as humans would be free to show ANYTHING in crystal clarity. :wink:

In addition to whatever everyone else has said, this too could be a reason.

The color gamut produced by any RGB digital video is always a sub-range of the full gamut that a normal human eye can see. In particular, the pure spectral colors of the rainbow are outside the RGB gamut; they can only be approximated.
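You can actually see this with a quick calculation: take the (approximate) CIE 1931 colour-matching values for pure 500 nm light and convert them to linear sRGB with the standard matrix - one channel comes out negative, meaning no mix of the display’s red, green and blue primaries can reproduce that colour. A minimal Python sketch:

```python
import numpy as np

# Approximate CIE 1931 colour-matching values for monochromatic 500 nm light.
xyz_500nm = np.array([0.0049, 0.3230, 0.2720])

# Standard XYZ -> linear sRGB conversion matrix (D65 white point).
xyz_to_linear_srgb = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

rgb = xyz_to_linear_srgb @ xyz_500nm
print(rgb)                      # the red channel comes out negative
print("in gamut:", bool(np.all(rgb >= 0) and np.all(rgb <= 1)))
```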

I dunno, that screenshot doesn’t seem very impressive to me. It practically screams “artificial.” As Kobal2 mentioned, it’s mainly the lack of detail that’s striking: all the curves are too flat, all the surfaces too clean, all the textures too simple.

The depth of field is a nice touch, and the specular lighting is good enough that if I squint a bit it almost looks like it could be a real photo, but really it’s not that great.

Striking detail: there are no drivers :stuck_out_tongue:

And yeah, I agree - the road texture is too flat and even, the shadow edges too sharp, not a speck of dirt anywhere, no exhaust heat or smoke… even the tire treads on the tracks are neatly even & symmetrical. Which is another pattern the human brain is “trained” to pick up: “This thing is similar to that thing! What are the odds? It’s significant!” goes the synaptic symphony*.

*Nerd powermetal band name.

There are drivers. You’re looking at the wrong side of the car :wink:

I’m not sure that’s really relevant, though… If I watch an episode of a live-action TV show on my computer screen, I can’t see the veins and wrinkles in the actors’ hands, either, but it still looks convincingly real, in a way that CG doesn’t. Likewise, that live-action TV show also has to contend with the limited dynamic range and all the other limitations of display on a computer screen.

This actually looked more real to me when my browser had compressed it slightly to fit the window than when I enlarged it to its full size. It also looked better when I used my Firefox Image Zoom extension to make it display at a bit above 100% size. At its native size everything was too ‘clean,’ but the slight blurriness introduced by squishing some pixels together (or interpolating some) actually seemed to help.
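That makes sense: averaging neighbouring pixels is a crude low-pass filter, so hard, aliased edges get smeared slightly, much like the interpolation a browser applies when it squeezes an image to fit. A minimal Python sketch of the effect on a toy image:

```python
import numpy as np

def downsample_2x(img):
    # Average each 2x2 block of pixels (a simple box filter).
    h, w = img.shape
    return img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample_2x(img):
    # Blow it back up by duplicating pixels.
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

# A hard vertical edge, deliberately placed off the 2x2 block boundary:
edge = np.zeros((4, 8))
edge[:, 3:] = 1.0
softened = upsample_2x(downsample_2x(edge))
print(edge[0])       # [0, 0, 0, 1, 1, 1, 1, 1]      -- razor sharp
print(softened[0])   # [0, 0, 0.5, 0.5, 1, 1, 1, 1]  -- the edge is softened
```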

Just because you do not consciously notice these little details, it does not follow that your brain is not registering them and that they do not affect your overall impression of the scene.

It is not impossible in principle to make CGI with the same level of irregular detail as something filmed or videoed. The problem is that it takes huge amounts of memory, and a lot of work that, because of the irregularity, can’t be entirely automated.

One of the big differences between “real time” graphics (e.g. computer games) and “production” graphics (e.g. film CGFX, which are approaching photorealism, albeit with render times measured in hours) is that the former are based on rasterization, while the latter are based on ray tracing. Rasterization is inherently less realistic, particularly in how it represents lighting (there are all sorts of crazy hacks that good graphics engines, such as CryEngine, use to get around the restrictions of rasterization, but at the end of the day they will never approach the kind of realism you can get with ray tracing).
However (ahem, slight plug), with the advent of ray-tracing GPUs we may see the gap close in the future.
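For anyone curious what ray tracing boils down to at its core (a bare-bones sketch, nothing to do with any particular engine or GPU): fire a ray per pixel, find the nearest surface it hits, and shade that point. The hard part is doing billions of these intersections, plus all the bounces, fast enough for a game.

```python
import math

def hit_sphere(origin, direction, center, radius):
    """Return the distance along the ray to the first intersection with
    the sphere, or None if the ray misses it (solve the quadratic for
    |origin + t*direction - center| = radius)."""
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 0 else None

# One ray fired straight down the -Z axis toward a unit sphere at z = -3:
t = hit_sphere((0, 0, 0), (0, 0, -1), (0, 0, -3), 1.0)
print(t)   # 2.0 -- the ray hits the near side of the sphere
```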

This is a very good discussion, but I can’t believe we’ve gotten this far without someone mentioning the uncanny valley.

Basically, people will believe fake humanoids up to a point. I don’t know for sure, but I think this is why Pixar keeps its humans exaggerated just like Disney did. It’s easier for us to believe an almost-human (as in The Incredibles) than a REALLY-almost human (umm…wasn’t it a Final Fantasy movie? Something did the “as-realistic-as-we-can” thing recently).

It seems like, for fooling the human brain, unless you can get it exactly right it’s better to miss by quite a bit rather than by a little, so the mistakes are not so jarring to the audience.

The most recent movie would be Beowulf, I think. When I went into the theatre I didn’t know the entire movie was made with CGI; it actually took me a few minutes to realize that I wasn’t looking at any real actors.