What advances in realtime 3d graphics would you like to see.

Not at all. Do you deny that if you’re looking at the bottom-left of a screen, whatever’s in the upper-right is not in focus?

If you really want to get down and dirty about it, it’s MORE unrealistic to do it your way… because in real life, things don’t get all hazy and distorted just because someone’s not looking at them.

The only way to accurately simulate what you want to do is with holographics or such.

But how would it be CONTROLLED when turned on? If I’m staring at a leaf and instead want to focus on the distant mountains, I just slightly shift how my eyes are positioned. It’s not even conscious… it’s something my brain does automatically.

In a game, how do you change between wanting to stare at something nearby and something far away?

For the love of God, man! I’m not gonna get pissed just 'cuz you want a snazzy effect in your games! :smiley:

Not ‘someone’, you. If you are stood on a beach looking at your finger, the vast majority of the beach, even the bit right ‘next to’ your finger, is completely out of focus.

If Photoshop can do it, then it is theoretically possible for a real-time graphics engine to do it. I dabbled in 3D gaming at university and, from my understanding, if the hardware were available, it would simply be a case of finding out what the crosshair or ‘reticle’ is pointing at and then bringing everything at an equal distance into focus, and everything nearer and farther out of focus.

That’s how it works IRL, but in a game, if you want to change what you are looking at, you move the mouse so that the reticle is ‘on’ what you are looking at. Again, the engine would quite easily be able to adjust the focus (the sharpness and blurriness) of the entire screen accordingly.
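In rough pseudocode terms, that reticle-driven focus idea might look something like the sketch below. Everything here is invented for illustration (the function names, the linear blur model, the depth-buffer lookup); a real engine would do the blur as a post-process on the GPU.

```python
# Hypothetical sketch of reticle-driven depth of field.
# The focal depth is simply whatever depth the reticle pixel has, e.g.:
#   focal_depth = depth_buffer[reticle_x][reticle_y]

def blur_radius(pixel_depth, focal_depth, max_blur=8.0, falloff=0.5):
    """Blur grows with distance from the focal plane (a simple linear model).

    Pixels at the focal depth get zero blur; nearer and farther pixels get
    progressively more, capped at max_blur.
    """
    return min(max_blur, abs(pixel_depth - focal_depth) * falloff)
```

So a pixel at the same depth as the reticle target stays sharp, while one 40 units in front of or behind it would get the maximum blur.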

Move the mouse so that the reticle (the thing you aim with) is on the thing you want to look at. This is pretty standard in 3D games, and players are quite accustomed to it.

Didn’t think you would, I just knew I was being a bit more argumentative than usual so I thought I’d apologise just in case. :slight_smile:

Stuff near focus.
I want to see real time 3D high definition porn :wink:

In fact, most modern computer graphics actually model brick walls that include the grooves of the mortar, as these days it can be achieved relatively simply with minimal computing power, unless the wall is completely peripheral or insignificant.

For example, in Spider-Man 2, the foreground buildings used in the clocktower sequence had every single detail individually modelled, including the mouldings and statues, and some of the interiors seen through the windows. Whereas the buildings about half a block away or more were much simpler, or were 2D images carefully spaced apart.

Toy Story 1 certainly did have very simplistic approaches to its texturing, but this was 1995. In The Incredibles, you’ll see that each brick and paving stone will be carefully built and placed as objects.

That’s just improvement in technology allowing for greater levels of detail.

When I mentioned the brick wall, I actually meant to apply that example mostly to console-level video games. I wasn’t quite sure about more recent movies, so your information is quite interesting.

Am I correct though in my assumption that skin is still mostly texture based?

Yes, but there is sub0surface scattering, which tries to emulate the transparency of flesh showing the blood colour underneath. It’s hugely comples, though, so it’s not used much yet. I believe Gollum had it, but Yoda does not, for example.

sigh

subsurface
complex

Oh, nothing theoretical about it. It’s just a resource hog with minimal gain.
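For what it’s worth, there are much cheaper approximations than full subsurface scattering. One common trick (sometimes called “wrap” or “half-Lambert” diffuse lighting) softens the terminator so skin doesn’t shade like plastic. The sketch below is illustrative only; the function name and default are invented:

```python
def wrap_diffuse(n_dot_l, wrap=0.5):
    """Cheap stand-in for subsurface softness: 'wrap' light past the terminator.

    n_dot_l is the cosine of the angle between surface normal and light
    direction (the standard Lambert term). With wrap=0, this reduces to
    plain Lambert shading; higher wrap values let light bleed further
    around the surface, giving a softer, fleshier look.
    """
    return max(0.0, (n_dot_l + wrap) / (1.0 + wrap))
```

It is nowhere near physically correct, but it costs almost nothing per pixel, which is why tricks like it showed up in games long before true subsurface scattering did.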

I can just imagine losing any interest in a game because I constantly have to shift my focus to whatever I’m looking at. It’d feel like having to reach up and use your fingers to move your eyeballs in the direction of what you want to look at… you’re making a player have to think about something that A: is automatic, and B: is already happening anyway.

I foresee nothing but confusion of the masses, sir.

My hypothesis is that gamers point at what grabs their attention anyway, and it is more fluid than your analogy of moving the eyeballs with your fingers. It might be one of those things that, once implemented, we (I) use and think “ugh! this is nothing like I imagined it!”, or it might be one of those things where someone like you thinks “well, actually this is quite easy to cope with. And it looks cool too.”

Don’t you think the lambs photo would be more effective if it had the blurred background like the pigeon photo? I am thinking a game with that effect would impress visually (as the photo effect does). But you’re probably right in that it would make playing the game awkward.

As if first-person shooters weren’t twitchy enough! Lobsang, I’m holding you personally responsible for the ADD problem in our country.

:smiley:

Nah… the targeting reticle is for your gun. If you’re in a firefight and you’re trying to get away, moving the reticle away from the enemy towards the door you’re running to in order to grab the handle will… get you killed. I’m just not feelin’ it, dude.

It might be good in slower paced puzzle oriented games, of the Tomb Raider vein, rather than in a manic paced game like Halo.

Although something I WOULD like to see, and should be very easy to implement in games, is an automatic adjustment to light sensitivity. It’s something your eyes do anyway… if you’re running around in bright lights, and then enter a dark room, you won’t be able to see shit for several moments. Gradually, over a few minutes, you should start to adjust to the low light… then when you step into brightness again, you’re temporarily blinded.
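That gradual adjustment is easy to model: each frame, nudge the current exposure toward the scene’s target exposure exponentially. The sketch below is just one possible implementation (the names and the `speed` constant are made up):

```python
import math

def adapt_exposure(current, target, dt, speed=0.5):
    """Move exposure toward the scene's target exponentially over time.

    dt is the frame time in seconds; speed controls how quickly the
    simulated eye adapts. Step into a dark room and exposure climbs
    gradually; step back into daylight and it falls the same way, leaving
    the screen briefly blown out.
    """
    return current + (target - current) * (1.0 - math.exp(-speed * dt))
```

Because the step is exponential, the adjustment is fast at first and then tapers off, which is roughly how dark adaptation feels.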

I’ve always hated how games would put in a Dark Room puzzle that required you to use night vision or something. It only annoyed me, since I just kept thinking, “All I should have to do is wait several seconds for my eyes to adjust… stupid game!”

Splinter Cell: Pandora Tomorrow does this, at least in the versus mode. When the lights go out, you can’t see anything, but after a few moments, the image becomes brighter.

For several reasons I think that the depth of field idea is not practical and not useful.

It is not practical because when I’m playing a game I am generally scanning the screen with my eyes for things (enemies, objects, doors, whatever). I should not have to do this with a pointer as it would be very clumsy compared to what my eyes do, and counter productive if the pointer is also used for aiming a gun or something.

It is not useful because, as another poster has already stated, your eyes already achieve this when looking at a TV/computer screen. I think you may be getting confused between how a camera works, and how your eye/brain partnership works. A camera pointed at and correctly focussed on a TV screen will produce a photo of the screen which is in focus everywhere. When you look at one part of the screen with your eyes, the other parts are NOT in focus.

To test this, try the following experiment: look at the word “File” at the top left of your computer screen and, without subsequently moving your eyes, try to read this post. You can’t do it, can you? Why? Because it is out of focus. My own experiments suggest that there is an area of around a 2 cm radius around where your eyes are actually pointing that is in focus enough to be able to read. Obviously, looking at a TV screen some distance away will have more of the screen in focus, but the effect remains.

I realise that the effect you get from your eyes not pointing at what you are concentrating on, is different from the effect you get from a camera and an object’s distance from it, but in the real world we see with our eyes and never quite get the camera effect anyway.

I think that before anyone has any attempt at practical depth of field graphics, developers have a lot of work to do just getting a useful “camera” on third person games. In addition to improving enemy AI.

Look, people. Depth of field IS implemented in the high-end cards out today. Most games don’t take advantage of it yet, but you can get demos right off ATI’s website that show it (the Ruby demo does it).

It’s nice looking alright, too bad they didn’t fix the camera clipping right through the orange ball at the end though.

It’s not out of focus. Being “in focus” is a property of light passing through a lens. You can’t read it because only the center of your eye (the fovea) has a sufficient density of light-sensitive cells to resolve the text.

Simulator designers would love to render only the area in the center of your vision at the highest resolution. There’s been a lot of research in the area, but it’s really hard to track the eyes in a non-invasive way, and furthermore the eyes move so fast that it’s hard to refresh the screen in the new area fast enough (it’s hard enough in simulation to respond to user input without delay; more than about 10-20 ms of delay and the user no longer feels like it’s realistic).

As for depth of field, the difficulty is largely that of determining where the user is looking. Some promising work was done measuring inter-pupillary distance (as you look closer, your eyes “cross” slightly, and this can be measured) and using that to track how near the eyes are focusing (because you really don’t need to know exactly where the eyes are focused, just whether it’s near or far), but the eye tracking was still too hard last I checked.
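The geometry behind that vergence trick is simple: the two eyes and the fixation point form an isosceles triangle, so if you can measure the angle between the eyes, you can recover roughly how far away they are converging. A sketch, with invented names and ordinary pinhole-style trigonometry:

```python
import math

def focus_distance(ipd_mm, vergence_deg):
    """Estimate how far away (in mm) the eyes are converging.

    ipd_mm is the inter-pupillary distance; vergence_deg is the full angle
    between the two eyes' lines of sight. Half the IPD over the tangent of
    half the vergence angle gives the distance to the fixation point.
    """
    half_angle = math.radians(vergence_deg) / 2.0
    return (ipd_mm / 2.0) / math.tan(half_angle)
```

The angle shrinks rapidly with distance, which is exactly why a coarse near/far estimate is about all this technique can reliably deliver.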

Fair enough, but the effect is still that of being out of focus.

If you want to simulate depth of field on a TV or computer screen, tracking how far away your eyes are looking will not work because you are always looking at the screen which is at a fixed distance. To do this you’d need to track where the eyes are looking rather than the near/far.

Speaking as a Professional Computer Game Programmer (ooooooh) I think you’re overinterested in this issue. It’s VERY easy (and many games do this) to have the artists build several different versions of a model, and use the low-poly-count ones when drawing a model that’s further from the camera. Given how easy that is to do, why would you need some super-fancy algorithm that automatically reduced poly count in realtime? That’s an example of a problem which has quite a good approximate solution, so why bother trying to come up with a “better” one?
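That distance-based model swap can be as simple as a handful of thresholds. A minimal sketch (the function name and cutoff distances are invented; real engines usually also add hysteresis so models don’t flicker between versions):

```python
def pick_lod(distance, thresholds=(10.0, 30.0, 80.0)):
    """Return the index of the pre-built model version to draw.

    0 is the highest-detail model; each threshold crossed moves to a
    cheaper version, down to len(thresholds) for anything far away.
    """
    for i, limit in enumerate(thresholds):
        if distance < limit:
            return i
    return len(thresholds)  # beyond the last threshold: lowest detail
```

With four artist-built versions of a model, this is a couple of comparisons per object per frame, which is why the approximate solution wins in practice.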

Of course, you’ll still have models represented by polygons. They just won’t be RENDERED as polygons.

But I agree that at some point, that technology will probably be practical… perhaps a 1024x768 display will be rendered by a massively parallel 1024x768 array of processors, one processor per pixel :slight_smile:

  1. Because it’s there.
  2. Because it’s more flexible.
  3. Because you don’t have to pay artists to develop several models.