It just occurred to me (that’s the second thread I’ve started with those 5 words in this forum) that I’ve never seen short focal length simulated in games.
Assuming I mean what I think I mean, I am referring to the phenomenon where only that which is in the center, the thing you are ‘looking at’, is in focus. The distance and foreground are out of focus. It gives a photograph a very impressive sense of depth, so I figured it would be just as impressive in a 3D game.
How easy would it be to implement? I’m guessing not too difficult, since all the engine would need to do is apply blur filters to things nearer or farther than the object in the center of the screen.
Post what otherwise unimplemented tricks/realism you’d like to see in future 3D games.
It’s called “depth of field” and most high-end cards can already do it. Not many games out there use it by default because only those high-end cards can handle it reliably. But the popular “Garry’s Mod” for HL2 fakes it even on low-end cards. Keep in mind that “blur filters” are a HECK of a lot more demanding than you seem to think. Anti-aliasing is essentially a special blur trick, and bumping up the levels on that can easily slice performance in half.
There are a lot of bells and whistles that would be cool, and are doable, but the question is simply whether the price-performance tradeoffs are worth it at this point.
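To make the cost point concrete, here’s a rough, purely illustrative sketch of the depth-based blur idea in Python (not how any engine or the HL2 mod actually does it; all names and numbers are made up). Each pixel gets a blur radius based on how far its depth is from the focal plane, and the variable-size per-pixel gather is exactly why the effect is expensive.

```python
import numpy as np

def circle_of_confusion(depth, focal_depth, focal_range, max_radius=8.0):
    """Blur radius in pixels: zero at the focal plane, growing with distance, capped at max_radius."""
    return np.clip(np.abs(depth - focal_depth) / focal_range, 0.0, 1.0) * max_radius

def depth_of_field(color, depth, focal_depth, focal_range):
    """Naive gather blur: average a neighborhood whose size depends on the blur radius."""
    h, w = depth.shape
    radius = circle_of_confusion(depth, focal_depth, focal_range).astype(int)
    out = np.empty_like(color)
    for y in range(h):
        for x in range(w):
            r = radius[y, x]
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = color[y0:y1, x0:x1].mean(axis=(0, 1))
    return out

# Toy frame: depth runs 0..1 from left to right, focus on the middle (depth 0.5).
color = np.random.rand(64, 64, 3)
depth = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
blurred = depth_of_field(color, depth, focal_depth=0.5, focal_range=0.1)
```

Real engines approximate this on the GPU with a fixed number of samples per pixel, but the cost still grows with how much you blur, which is why the cheap cards choke on it.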
Correct me if I’m wrong, but wouldn’t this effect be redundant, or more annoying than anything, since our eyes already apply this “effect” to whatever part of the screen we’re looking at?
Yeah, since the entire TV screen is at the same distance (roughly) from your eyes, it’s all in focus.
Easy to implement? Sure. Easy to do real-time? Maybe. Easy to create an interface that lets you focus on anything other than the default without complicated commands or special hardware? No dice.
Since when can you look at the entire screen at once? If you’re looking at the lower-right of the screen, then the upper-left is out of focus. Like Duder said… unnecessarily redundant, and a REAL resource hog.
I’d like to see more games implement a greater view distance. Far Cry was fucking impressive in this regard… seeing a whole world operating around you goes a lot farther towards making it feel “real” than higher poly counts.
Most of the improvements to games I’d like to see now are AI-related. Every new game promises blistering and cunning AI, but even Half-Life 2’s AI was pathetic compared to what we were led to expect. You can make a game look EXACTLY like real life, but as long as the people and creatures inhabiting that game are stupid, lifeless cardboard models, it’s going to reek of gameyness.
The issue with drawing objects at a distance is that the more objects you have, the more polygons you need to render. Last I heard (and this was 2-3 years ago, so my info might be out of date), there was research into cutting down the number of polygons based on the distance between an object and the POV, but it was still far from achievable in real time.
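For what it’s worth, the crudest version of that idea is just swapping between a few pre-simplified meshes by camera distance. Here’s a hypothetical sketch in Python (names, polygon counts, and distance thresholds are all invented for illustration), not a claim about how any particular game does it:

```python
from dataclasses import dataclass

@dataclass
class LodMesh:
    polygon_count: int    # how detailed this version of the model is
    max_distance: float   # use this mesh while the object is closer than this

def select_lod(lods, distance_to_camera):
    """Pick the most detailed pre-simplified mesh whose threshold still covers the object."""
    for lod in sorted(lods, key=lambda l: l.max_distance):
        if distance_to_camera <= lod.max_distance:
            return lod
    return min(lods, key=lambda l: l.polygon_count)  # beyond every threshold: cheapest mesh

# Hypothetical character with three hand-made detail levels.
character_lods = [
    LodMesh(polygon_count=5000, max_distance=20.0),
    LodMesh(polygon_count=1500, max_distance=60.0),
    LodMesh(polygon_count=400, max_distance=float("inf")),
]
print(select_lod(character_lods, 45.0).polygon_count)  # -> 1500
```

The hard research problem is generating those simplified versions automatically (or continuously) instead of having artists model each level by hand.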
If I look at the top right of the screen I am looking at now, the bottom left is still in focus; the only thing it is not is in view. The bottom left and the top right are (roughly) the same distance from my eyes, so naturally they are both in focus. What is not in focus while I look at the screen is the wall behind the monitor. I wish I could find my picture of a pigeon to demonstrate this: subject in focus.
They’ve been doing this for years. The oldest example I can think of is The Legend of Zelda: Ocarina of Time. The further away the camera is from Link, the main character, the fewer polygons he consists of. While it’s well done, if you pay attention you can see the drop in detail.
This has also been used for environments in the Rogue Squadron games, and surely others. As with Zelda, if you look closely, you can see the detail fade in as you get closer to an object.
I know I’ve seen this effect used in the real-time cutscenes of certain games (not pre-rendered ones), but I’ve never seen it used during actual gameplay, likely for the reasons mentioned above.
Unfortunately, I’m unable to think of the games that I’ve seen the effect used in. Perhaps Metal Gear Solid 2 or Resident Evil 4?
Actually, upon further review, I think The Legend of Zelda: The Wind Waker actually did use this effect in real time, only very subtly. Interestingly enough, I do recall finding that effect fairly annoying.
Yes yes, I know that, but YOU can’t tell it’s in focus unless you look at it. THAT is why it’s redundant. Or are you claiming that your peripheral vision is as clear as your… uh… nonperipheral vision? Why should game designers be so stupid as to waste precious system resources to do what your eyes are already doing anyway?
And most importantly, how do you expect to control it? Are you going to require the player to position a cursor to pinpoint where his eyes are looking?
Right, which results in almost every game having a “fog” at varying distances from the player. I mentioned Far Cry because the CryEngine was able to pump out a much larger number of polygons for the same amount of processing power. The team claimed ten times as much (whether that’s true is dubious), but the end result is they managed to achieve a 1.2 kilometer draw distance. It yielded some fucking impressive visuals and vistas.
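For anyone curious what that “fog” actually is under the hood, here’s a minimal sketch in Python of the classic linear distance-fog trick (parameter names invented, and real engines do this per pixel on the GPU, often with exponential falloff instead). Anything past the fog end is fully fogged, which is exactly what lets the engine stop drawing it.

```python
def fog_factor(distance, fog_start, fog_end):
    """0.0 = unfogged, 1.0 = fully fogged (safe to stop drawing the object entirely)."""
    if distance <= fog_start:
        return 0.0
    if distance >= fog_end:
        return 1.0
    return (distance - fog_start) / (fog_end - fog_start)

def apply_fog(scene_color, fog_color, distance, fog_start, fog_end):
    """Blend the rendered color toward the fog color as distance grows."""
    f = fog_factor(distance, fog_start, fog_end)
    return tuple((1.0 - f) * s + f * c for s, c in zip(scene_color, fog_color))

# A distant cliff at 900 m, with fog running from 300 m to 1200 m, ends up mostly grey.
print(apply_fog((0.2, 0.5, 0.1), (0.6, 0.6, 0.6), 900.0, 300.0, 1200.0))
```

Pushing the fog end out to something like 1.2 km means you have to actually render everything inside that radius, which is where the polygon budget Far Cry bragged about comes in.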
Do you deny that there is a significant difference between the pigeon and the lambs? If the same difference were implemented in a game, it would be just as clearly visible.
And as I’ve already pointed out, my eyes are not doing it. Every point on the screen is the same distance from my eyes, so every point is equally in focus. Human peripheral vision is good enough that a depth-of-field implementation in a game (let’s say a monster is the subject, so you’re not interested in what’s in the background unless you turn to look at it, which would bring it into focus) would be noticeable.
It would be an option that the user can turn on or off. If it gets too annoying for one player, he can turn it off. If another player finds it as impressive an effect as I would, he might leave it on and tolerate having to look at things to see them in focus.
Hmm… I’m trying to think of how to explain this. From the user’s standpoint, either a graphics engine does something or it doesn’t. From a developer’s standpoint, it does something, it doesn’t do it, or it fakes doing it.
The Final Fantasy movie is a good example of what I have in mind. The Shrek movies represent the best level of skin detail that can be rendered with modern physics engines in a reasonable amount of time. For Final Fantasy, they faked it–they threw together a bunch of tricks to make the skin look right.
And that’s the state of polygonal simplification–right now there are tricks to do it, but there’s no cohesive method. Does that make sense?
That’s a bit different than what I originally responded to, I think, unless I’ve totally misunderstood you?
Let me preface the following by saying that I’m not a programmer, but I’ve kept fairly up to date on the industry. Please correct me if I’m wrong.
I’m pretty sure polygons weren’t used for the skin itself, but for the actual surface that the skin sits on, for both Final Fantasy and Shrek. The rest is all texturing, as with video games, only the movies’ texture effects are obviously far more sophisticated. Last I read, all the fine crevices, curves, depths, etc. are handled by different texture layers, such as “bump maps.” For instance, when you see a computer-generated brick wall and can make out the depth between the bricks and the mortar, that’s a texture simulating light hitting the wall as if the mortar were recessed further in; they don’t actually build all of that out of polygons. I know that, at the least, this was done for Toy Story, and I’m fairly confident it’s still being used for movies today.
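To illustrate the bump-map idea with a toy sketch in Python (this is not how any of those films actually did it, and every name and number here is invented): a grayscale height map tilts the surface normals used for lighting, so a perfectly flat polygon shades as if the mortar lines were recessed.

```python
import numpy as np

def bump_lighting(height_map, light_dir, strength=1.0):
    """Diffuse shading for a flat surface whose normals are perturbed by a height map."""
    # Slopes of the height map approximate how the fake surface tilts at each pixel.
    dh_dy, dh_dx = np.gradient(height_map)
    normals = np.dstack([-strength * dh_dx, -strength * dh_dy, np.ones_like(height_map)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    light = np.asarray(light_dir, dtype=float)
    light /= np.linalg.norm(light)
    return np.clip(normals @ light, 0.0, 1.0)  # N·L brightness per pixel

# A "brick wall" height map: mortar lines are lower (darker) than the bricks.
wall = np.ones((64, 64))
wall[::16, :] = 0.0  # horizontal mortar grooves every 16 pixels
shading = bump_lighting(wall, light_dir=(0.5, 0.5, 1.0))
```

The geometry stays flat the whole time; only the lighting responds to the grooves, which is why it’s so much cheaper than modeling every brick edge out of polygons.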
You didn’t misunderstand. The skin is just an example of the difference between being able to do something and faking it–it’s not particularly germane to the discussion of polygons.