What was the first video game with "accurate" mirrors?

I grew up in Los Alamos, so we got access to one of the first screenings of a serious ray-traced animation back in 1983.

As I recall, it took many (hundreds?) of hours on the fastest supercomputer of the time (a Cray-1) to produce this one-minute movie. At the time it knocked our socks off.

Excellent! Yes, that’s a good example of traditional ray tracing, which was of course all that was computationally possible back then. It does a good job with basic reflection and refraction. What it doesn’t achieve are caustics–the focusing of light as through a lens. The transparent sphere just has a black shadow, for instance. And there is no global illumination, so all of the shadows are black, even though they should be getting some reflected light from nearby objects that are illuminated. Still, that’s a cool early animation. A lot of progress has been made since then.

Yes, I know. I’ve been playing with CGI programs off and on since the early 1990s, starting with POV-Ray and 3D Studio for DOS. Most of my time was with Maya, though, from version 2.0 up to 8.5. (I make no claim to great skill with software or inspiration with the creations, though.)

POV-Ray was a miserable experience on a 386SX-16 (no floating-point unit). Nevertheless, it was a great inspiration in writing my own rendering software.

The technical discussion is fascinating, but for the benefit of anyone who wants a simple answer, here’s an overview article summarizing how video games simulate mirror effects. The TLDR is that it seems really simple to us (“just reflect what you see!”) but in a simulated environment a reflection is really hard.

(Edit to add: I realize the OP is not asking “how are reflections made” but this is useful context for understanding why it has taken so long and why they aren’t trivially added to every game.)

Cube maps are a nice technique, useful for curved surfaces where the result doesn’t have to be too accurate, just plausible-looking. As your link mentions, it involves rendering the scene onto each of the six faces of a cube, which captures the surroundings from every angle. Here’s an example from a past project where I colored each face differently and added some text:

If you look carefully, you can see that some of the faces aren’t quite correct. As they say, the first principle in computer graphics is to always have an even number of sign errors. I failed that principle here.

At any rate, with fixed transformation matrices and the normal coloring, the scene looks like this:

It could almost pass for ray tracing if you aren’t looking too closely.
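
By the way, for anyone who wants to see the sampling side spelled out, here’s a rough sketch in Python of how a cube map lookup works: given a reflection direction, pick which of the six faces to read and where on it. (The function is mine, not from that project, and the exact sign/orientation conventions vary between graphics APIs.)

    # Rough sketch of a cube map lookup: the largest component of the direction
    # picks the face, the other two components become texture coordinates.
    def cubemap_lookup(x, y, z):
        ax, ay, az = abs(x), abs(y), abs(z)
        if ax >= ay and ax >= az:            # major axis is X
            face = "+x" if x > 0 else "-x"
            major, u, v = ax, (-z if x > 0 else z), -y
        elif ay >= az:                       # major axis is Y
            face = "+y" if y > 0 else "-y"
            major, u, v = ay, x, (z if y > 0 else -z)
        else:                                # major axis is Z
            face = "+z" if z > 0 else "-z"
            major, u, v = az, (x if z > 0 else -x), -y
        # Map from [-1, 1] to [0, 1] texture coordinates on that face.
        return face, (u / major + 1) / 2, (v / major + 1) / 2

    # A direction pointing mostly along +X samples the +x face near its center:
    print(cubemap_lookup(1.0, 0.1, -0.2))    # roughly ('+x', 0.6, 0.45)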

With an additional trick, one can do even better:

Look closely, and you can see recursive reflections–i.e., the reflection of each ball within its own reflection, and so on, several levels deep. The trick is to use the previous frame’s cubemap as the starting point. After several frames, you can see the level of reflection get deeper and deeper. There is some time lag involved, but you don’t really notice it.
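
If it helps to see that feedback loop written down, here’s a toy model (made-up function, no real rendering) that just tracks how deep the reflections get from one frame to the next:

    # Toy model of the "reuse last frame's cubemap" trick. Each "cubemap" here
    # just records how many levels of reflection it contains.
    def render_cubemap(prev_depth):
        # While rendering the six faces, any reflective ball in view samples the
        # *previous* frame's cubemap, so the new one inherits its depth plus one.
        return prev_depth + 1

    depth = 0  # frame 0: the cubemap has no reflections in it yet
    for frame in range(1, 6):
        depth = render_cubemap(depth)
        print(f"frame {frame}: reflections visible {depth} level(s) deep")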

Test Drive in 1987 had a working rear-view mirror with the same level of graphics as the main view, rendered in that weird VGA color scheme.

Maniac Mansion and Leisure Suit Larry had working mirrors.

Getting the same results as reality is simulating it. I think what you’re trying to describe is varying levels of depth of a simulation.

As you say, light begins at a source and ends at a detector. But the mathematics of light propagation is typically reversible, so using that reversibility to find a solution is still simulation. And as you say, there are limits to it, so it’s not necessarily the best means of finding other solutions.

But ray tracing is only an approximation as well. Photons interact with the matter in each medium they pass through. To simulate more accurately, we would need to take into account the atomic, molecular, and solid-state electronic energy states. A refractive index or a reflection coefficient is a high-level approximation that ignores the underlying physics.

Of course, the goal of art generally isn’t to simulate anything, but to achieve the artists’ desires.

I have some vague recollection that the newer GPUs can shortcut some of the ray tracing with a machine learning algorithm that approximates the in-between frames, thus allowing a higher frame rate without having to trace each one. Does this ring a bell for anyone? There’s a good chance I’m speaking nonsense.

RE: Duke Nukem

First game I thought about in this context. The only thing I remember is that there were bathrooms on some levels with mirrors, where Duke Nukem could push a button that elicited a “Damn, I’m looking good” remark. There never were enemies or tasks to solve in the bathrooms; they only served his (or the player’s?) ego.

I disagree completely. Consider planetary motion: the theory of epicycles vs. heliocentric motion. One is a correct model of reality; the other is not. But they are both trying to recreate the same motions. It’s not even about accuracy, really; initially, there were times when epicycles gave better results, before elliptical motion was known. And it’s easy to show that epicycles can recreate arbitrary motion, including General Relativity and any other effects, but the equations just get absurdly complicated.

It’s the same behavior seen in other areas. When you have the right model of reality, the complexity of the corrections scales much more gradually. Small corrections have (relatively) simple explanations. And those explanations cover a bunch of other cases at the same time. When you have a bad model, each new correction introduces vastly more complexity, and only ever corrects that one thing, not a bunch of others.

Simulating how photons bounce around, even while neglecting diffraction and polarization and other effects, is still pretty close to how reality works. But traditional ray tracing is not. It’s more like the ancient Greek theory of eyesight where your eyes shot out beams that somehow sampled the world.

That said, traditional ray tracing is a lot closer than a bunch of other techniques, which often don’t try to model reality at all.

Not at the scale that CGI works at. Consider the simplest type of lighting: a diffuse surface. It’s essentially defined as a surface where an incoming photon will bounce away in a random direction. That’s not reversible.

What’s happening is that there’s a microstructure that’s smaller than our ability to model. You can imagine a collection of perfect mirrors oriented in random directions. If a photon hits one, it’ll bounce off in a perfect reflection. But these mirrors are much smaller than a single pixel, so we don’t actually know the orientation of the mirror we hit. It’s effectively random. But since the microstructure is so small, we can approximate it with a distribution of some kind.
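
Here’s a little toy sketch of that picture, in case it helps (nothing here comes from a real renderer): fire the same ray at the surface a few times, let each hit reflect perfectly off a randomly oriented micro-mirror, and the outgoing direction comes out random anyway.

    import math
    import random

    # The macroscopic surface normal is +z; micro-mirrors point in random
    # directions in the upper hemisphere.
    def random_micro_normal():
        while True:
            n = (random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(0, 1))
            length = math.sqrt(sum(c * c for c in n))
            if 0 < length <= 1:
                return tuple(c / length for c in n)

    def bounce(d):
        # Perfect mirror reflection r = d - 2(d.n)n off one random micro-mirror.
        while True:
            n = random_micro_normal()
            dot = sum(a * b for a, b in zip(d, n))
            if dot < 0:                      # the ray must hit the mirror's front
                r = tuple(a - 2 * dot * b for a, b in zip(d, n))
                if r[2] > 0:                 # and leave above the macro surface
                    return r

    incoming = (0.0, 0.0, -1.0)              # straight down at the surface
    for _ in range(3):
        print(bounce(incoming))              # a different direction every time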

The more general way to model this is what’s called a BRDF, or bidirectional reflectance distribution function. It basically says: for this direction of incoming light, what proportion of it bounces off in some other given direction? That’s basically a 4-dimensional function, since the incoming and outgoing directions are each 2D (points on a hemisphere). There are more advanced versions, though, that take frequency and other things into account.
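
For the simplest concrete case, here’s what a Lambertian (ideal diffuse) BRDF looks like as code; purely illustrative, and it ignores frequency and everything else:

    import math

    # A BRDF takes an incoming and an outgoing direction (two angles each, so
    # four numbers total) and returns the proportion of light reflected per unit
    # solid angle. For an ideal diffuse surface the answer is just albedo / pi,
    # no matter what the four angles are.
    def lambertian_brdf(theta_in, phi_in, theta_out, phi_out, albedo=0.8):
        if theta_in >= math.pi / 2 or theta_out >= math.pi / 2:
            return 0.0           # light arriving or leaving below the surface
        return albedo / math.pi  # independent of all four angles

    print(lambertian_brdf(0.3, 1.0, 0.7, 2.5))  # same value for any valid angles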

You’re mixing up a couple of things, but it’s understandable and not complete nonsense.

The latest NVIDIA GPUs use machine learning for frame interpolation. They look at the before and after frames, and also some auxiliary data like motion vectors, and compute an intermediate frame. This doesn’t have anything to do with ray tracing per se. But since ray tracing is more work, turning it on might reduce your performance below what’s acceptable, and you might want to use the frame interpolation to bring it back to a reasonable level.

NVIDIA GPUs also use machine learning for noise reduction in ray tracing. Since GPUs aren’t infinitely fast, they can’t just shoot hundreds of rays per pixel, but without that you end up with a very noisy looking image. One example:

So NVIDIA has developed denoising filters using machine learning to get a smooth image even with noisy inputs. It’s not just an ordinary smoothing filter, since it can take the (known) geometry into account.
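
To put a rough number on how noisy a handful of rays per pixel is, here’s a back-of-the-envelope simulation (plain random numbers standing in for actual path-traced samples). The noise only shrinks like one over the square root of the sample count, which is why brute-forcing it is so expensive:

    import random
    import statistics

    # A pixel's value is basically an average of random samples; the noise is
    # the spread of that average across many trials.
    def estimate_pixel(samples_per_pixel):
        return sum(random.random() for _ in range(samples_per_pixel)) / samples_per_pixel

    for spp in (1, 4, 16, 64, 256):
        estimates = [estimate_pixel(spp) for _ in range(1000)]
        print(f"{spp:4d} rays/pixel -> noise ~ {statistics.stdev(estimates):.3f}")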

Our disagreement is semantic. I’d say correctness can only be evaluated by accuracy. And there’s no accuracy threshold separating a simulation from a non-simulation. The phrase “correct model of reality” is hubristic and non-scientific.

The rest of the post (including what I didn’t quote) is addressing the utility of different models. I have no disagreements with that. Utility is a strong differentiator between models.

I play a lot of simulation games, and even today, mirrors are often not fully functional.

  • House Flipper will show reflections of stationary objects but not your character, so you feel like you’re a vampire.
  • Driving games, at least the better ones, will have functioning rear-view mirrors. This includes flight sims like DCS, and even Microsoft Combat Flight Sim from about 1999/2000 (ETA: actually it did not; I misremembered).
  • For sure, Duke Nukem was memorable in having a bathroom (or two) with a functioning mirror.
  • The Sims 4 definitely has functioning mirrors, which actually improve the game’s viewability, in that everything is reflected properly, even if the mirror is placed in a strange location.

I think that’s a bit different than what the OP’s looking for; in that case, they’re just rendering what’s behind you as part of the scene in front of you, and it’s not really a mirror effect.

As far as I recall, that sort of real-time mirror stuff didn’t really become common until maybe the very late 1990s/early 2000s with the advent of dedicated GPUs like the 3dfx Voodoo 2 and Nvidia Riva TNT2. Those finally gave PCs the ability to offload those graphics calculations to the GPU, thereby giving the system enough oomph to actually do mirrors in a timely fashion.

One additional complication is that in first-person games, if you can see other parts of your body, the model for those is usually not remotely accurate when seen from any vantage point other than your eyes (because, of course, an accurate model “looks wrong”). So you need two different models, one for the viewpoint from your eyes, and one for the view in the mirror.

It’s true that a lot of games that are locked to a first-person perspective have comically deformed player models, but that’s just because there’s no need to polish a model the player never sees. It’s not an inherent limitation of first-person games, and indeed, there are plenty of games that let the player switch between first- and third-person views without needing to create separate models for each perspective.

To me that just seems like the obvious way to do it. Why go with ray tracing when you can just have a second camera object and convert the view to a texture on the mirror? Place a second camera behind the mirror, with the mirror being backless (and anything else between the mirror and camera being culled out/invisible). Then project it as a texture on the mirror (possibly with decals on top).
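
The math behind that second camera is just a reflection across the mirror’s plane; here’s a minimal sketch (the plane, positions, and names are made up for illustration):

    # Reflect the real camera's position and view direction across the mirror's
    # plane; rendering from this mirrored camera gives the image to paste onto
    # the mirror as a texture.
    def reflect_point(p, plane_point, plane_normal):
        # plane_normal is assumed to be a unit vector
        d = sum((pi - qi) * ni for pi, qi, ni in zip(p, plane_point, plane_normal))
        return tuple(pi - 2 * d * ni for pi, ni in zip(p, plane_normal))

    def reflect_direction(v, plane_normal):
        d = sum(vi * ni for vi, ni in zip(v, plane_normal))
        return tuple(vi - 2 * d * ni for vi, ni in zip(v, plane_normal))

    # Mirror lying in the plane x = 5, facing back toward -x:
    mirror_point, mirror_normal = (5.0, 0.0, 0.0), (1.0, 0.0, 0.0)

    camera_pos = (2.0, 1.5, 0.0)
    camera_dir = (1.0, 0.0, 0.3)

    print(reflect_point(camera_pos, mirror_point, mirror_normal))  # (8.0, 1.5, 0.0)
    print(reflect_direction(camera_dir, mirror_normal))            # (-1.0, 0.0, 0.3)

You’d then render the scene from the mirrored camera into a texture each frame (clipping away anything on the camera’s side of the mirror plane so it doesn’t leak through), and that texture is what the mirror displays.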

Ray tracing makes more sense for shiny objects, not full on mirrors.

A texture on the flat mirror surface might be good enough for a small rear-view mirror, especially if you’re always seeing it from the driver’s seat. But the image in a mirror is three-dimensional, so a flat image will immediately look wrong if you can move around relative to it.