Nvidia just released its latest video card line (40xx) and lots of performance tests are coming out. It is a lot to wade through.
My big question: the 40xx series seems to rely heavily on DLSS 3.0 for its stellar numbers.
But, near as I can tell, the video card is just making up some frames and inserting them into the video stream, so there are more frames and the numbers look faster.
But that seems like a fudge. They have not improved graphics quality; they have just artificially goosed framerates. Or maybe I am wrong and those extra frames totally make the experience better!
I really do not know. Hence the question. Can anyone shed light on what’s happening here?
All of these supersampling techniques (DLSS, FSR, XeSS) work by rendering an image at a lower resolution and then using AI to blow it up to a higher resolution. This is less intensive than rendering it at the native resolution but still results in a pretty good image (how good depends on the technique, game and settings). So if your game is struggling to run Space Dragons III at 1440p, you could turn on DLSS and the GPU will actually render the images at 1080p and use the supersampling AI to extrapolate the missing information and present you with a 1440p image.
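If it helps to see the shape of it, here is a toy sketch of that pipeline in Python. None of this is real DLSS/FSR/XeSS code; `render_scene` and `upscale` are my own stand-ins (a plain resize instead of a trained neural network), but it shows the "render low, present high" flow:

```python
# Toy sketch of the upscaling idea behind DLSS/FSR/XeSS (not the real algorithms).
# The GPU renders at a lower internal resolution, then an upscaler fills in the
# missing pixels to produce the output resolution. A nearest-neighbour resize
# stands in for the neural network here.
import numpy as np

def render_scene(width, height):
    """Stand-in for the game's renderer: just produces a gradient image."""
    x = np.linspace(0.0, 1.0, width)
    y = np.linspace(0.0, 1.0, height)
    return np.outer(y, x)  # shape (height, width)

def upscale(image, out_width, out_height):
    """Stand-in for the AI upscaler: nearest-neighbour resize."""
    in_h, in_w = image.shape
    rows = np.arange(out_height) * in_h // out_height
    cols = np.arange(out_width) * in_w // out_width
    return image[rows][:, cols]

# Following the example above: render at 1080p internally, present at 1440p.
internal = render_scene(1920, 1080)
output = upscale(internal, 2560, 1440)
print(internal.shape, "->", output.shape)   # (1080, 1920) -> (1440, 2560)
```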
The intent isn’t to improve graphics quality since quality will always be lower than native. The intent is to considerably improve framerates at a minimal hit to quality. This can help someone maintain a playable framerate at 4K resolution with a higher end card or maintain a playable 1080p resolution in a game like Cyberpunk 2077 with an RTX 3050. The image won’t be quite as clear as it would be natively but, if you’re doubling your framerate (and going from sub-30 or sub-60 to over those bars) then the hit in image quality might still be well worth it.
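The framerate gain mostly falls out of simple pixel math. A rough back-of-the-envelope, assuming render cost scales with the number of shaded pixels (it doesn't exactly, but it's close enough for intuition):

```python
# Rough pixel-count arithmetic behind the framerate gain: dropping the internal
# resolution cuts the per-frame shading work before the (much cheaper) upscale step.
resolutions = {"1080p": (1920, 1080), "1440p": (2560, 1440), "4K": (3840, 2160)}

native = resolutions["1440p"][0] * resolutions["1440p"][1]    # 3,686,400 pixels
internal = resolutions["1080p"][0] * resolutions["1080p"][1]  # 2,073,600 pixels
print(f"Internal render is {internal / native:.0%} of the native pixel work")
# -> roughly 56%, which is where much of the extra framerate comes from
```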
The person is not really getting the higher resolution. They are getting a best guess. Which also makes me wonder how well ray tracing works. If they are fudging the resolution, doesn't ray tracing, their claim to fame, also suffer?
Fair enough. As long as I can find data for both supersampling on and off, it's fine by me. It's going to continue to be part of the GPU tech going forward – everyone is including it – and trying to get the same frames without it on current tech would create impractical amounts of heat and power consumption. It's a "best guess" but a best guess can still be extremely accurate (especially when the one 'guessing' is the one rendering the initial image) and the hits on higher quality modes are often pretty negligible for the extra frame rate. That said, I rarely use it just because I have a 3080Ti and game at 1440p, so my card can get adequate frames for me via raw performance. If I had a 4K monitor, I'd probably be using it.
Ray tracing is a different thing entirely. Calculating how light moves from source to object then reflects off to illuminate another object is the same regardless of the screen rendering size. The supersampled result won’t be “true” but that’s just the supersampling limitations – the way light works will be unaffected. Ray tracing uses a lot of GPU power though which is why it’s usually used in conjunction with supersampling to maintain good/playable frame rates.
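A rough ray-budget sketch of why the two get paired. The `rays_per_frame` function is my own simplification, not how any real renderer counts its work, but it shows that the number of rays launched scales with however many pixels you actually render, while the per-ray light math is unchanged:

```python
# Back-of-the-envelope ray budget: the light-transport math per ray is the same
# either way, but the number of rays scales with the rendered pixel count.
def rays_per_frame(width, height, samples_per_pixel=1, bounces=2):
    # One primary ray per sample, plus one secondary ray per bounce (simplified).
    return width * height * samples_per_pixel * (1 + bounces)

native_4k   = rays_per_frame(3840, 2160)
internal_hd = rays_per_frame(1920, 1080)   # e.g. a 4K "performance mode" internal res
print(f"4K native: {native_4k:,} rays; 1080p internal: {internal_hd:,} rays")
print(f"Upscaling cuts the ray count to {internal_hd / native_4k:.0%} of native")
```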
But this is not supersampling (which is used for anti-aliasing). IIRC supersampling renders a scene at a higher resolution and then displays it at a lower resolution.
That’s not what is happening here.
DLSS boosts framerate by inventing a frame that doesn’t otherwise exist and inserting it.
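For anyone who wants to see what "inventing a frame and inserting it" means at the simplest possible level, here is a toy Python sketch. To be clear, this is not how DLSS 3 actually does it (it uses motion vectors and a dedicated optical flow unit rather than naive blending), but it shows a presented stream ending up with frames that were never rendered:

```python
# Toy illustration of frame generation: synthesize an in-between frame from its
# rendered neighbours and insert it into the presented stream.
import numpy as np

def generate_intermediate(frame_a, frame_b):
    """Make up a frame that was never rendered by blending its neighbours."""
    return (frame_a.astype(np.float32) + frame_b.astype(np.float32)) / 2.0

rendered = [np.full((1080, 1920), v, dtype=np.uint8) for v in (10, 30, 50)]

presented = []
for a, b in zip(rendered, rendered[1:]):
    presented.append(a)                            # real rendered frame
    presented.append(generate_intermediate(a, b))  # inserted, never-rendered frame
presented.append(rendered[-1])

print(f"{len(rendered)} rendered frames -> {len(presented)} presented frames")
```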
DLSS is literally Deep Learning Super Sampling
FSR is FidelityFX Super Resolution
XeSS is Xe Super Sampling
To grab a random tech explanation:
FSR is AMD’s answer to Nvidia’s Deep Learning Super Sampling (DLSS). Like DLSS, FSR is a supersampling feature that makes a game look like it’s rendering at a higher resolution than it really is. So, the engine may render the game at 1080p, and then FSR steps in to fill in the missing pixels to make it look like a 1440p output.
I know the same term has been used to describe taking unused GPU overhead to render an image larger than the screen resolution and then shrink it back down to fit. I don't know why both techniques share the same term, but "supersampling" is the accepted term for what modern cards are doing to get higher frame rates.
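Since the same word covers two opposite operations, here is a quick sketch of the difference in direction (just the resolutions involved, not the algorithms themselves):

```python
# Two things that both get called "supersampling":
#   classic SSAA: render ABOVE the display resolution, then downscale (anti-aliasing)
#   DLSS/FSR/XeSS: render BELOW the display resolution, then upscale (performance)
display = (2560, 1440)

ssaa_internal    = (display[0] * 2, display[1] * 2)  # 2x2 SSAA renders 4x the pixels
upscale_internal = (1920, 1080)                      # example lower internal resolution

print("SSAA renders at", ssaa_internal, "and downscales to", display)
print("DLSS-style upscaling renders at", upscale_internal, "and upscales to", display)
```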
For those so inclined, here is a 35-minute deep dive into DLSS (it was posted only a couple of hours ago, so after I made this thread).
It seems DLSS works well if your card can render the scene around 100-120FPS on its own (no DLSS). DLSS will then boost the framerate past that with no noticeable hit to fidelity.
But, ISTM, if you are rendering at 100+ FPS already then there is no need for DLSS.
It still is sharper and more detailed, which is what most people mean by “higher resolution.” It may be a guess, but that guess is quite good, nearly as good as running native on more powerful hardware. Heck, sometimes people say it looks better.
And ray tracing is exactly where this sort of thing shines most, because ray tracing is so computationally intensive. Combine that with how freaking expensive the newer cards are getting, and people like having a way to get 95% of the way there without paying so much.
It works well at much more than that. It just works best in those scenarios.
And over 100 isn't enough for a 240Hz display. And those higher framerates do in fact allow people to play better, though of course there are diminishing returns. I know Linus Tech Tips did a video testing how framerates affect the reaction time of real players.
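The diminishing returns are easy to see in plain frame-time arithmetic:

```python
# Each jump in framerate shaves fewer milliseconds off the time between frames.
for fps in (30, 60, 120, 240):
    print(f"{fps:>3} fps -> {1000 / fps:.1f} ms per frame")
# 30 -> 33.3 ms, 60 -> 16.7 ms, 120 -> 8.3 ms, 240 -> 4.2 ms:
# going 30->60 saves ~16.7 ms per frame, but 120->240 only saves ~4.2 ms.
```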
If it were up to me, 1080p 60hz would be enough, but it clearly isn’t for the PC gaming enthusiasts. The problem is, the display tech has outstripped the ability of the GPUs to power them at their fullest—at least, when using traditional rendering methods.
And then you throw ray tracing on top of that. You need some sort of "fudge" for this all to work. And, bonus, we get tech that can also upscale images and movies and stuff, too.
I watch a lot of lower-tech Youtube, not "retro" but stuff using 2000-series cards or an RX 6600 or RTX 3050, etc., and DLSS (and its kin) are very helpful in those games. It's literally the difference between "I can't play Cyberpunk 2077 at 1080p because I get 14fps in the city" and "I can play CP2077 at 45-50fps". And it still looks good and certainly better than playing it at 720p would. Most of the "cost" boils down to things like "Well, you can see these tail lights and the bumper aren't quite as crisp here and those billboards are a little blurrier" that you might notice in a screenshot or review but likely won't ever notice while actually running from the cops in-game.
Really though, the answer is to watch some comparisons and make your own choices based on the display you plan to use.
I think that at least part of the issue in this thread was confusion over the actual topic. You seem to be referring exclusively to DLSS 3.0, which does indeed generate additional frames to try to improve performance even when the CPU is bottlenecking the GPU. I was referring to DLSS more in general and DLSS 2.0 more specifically, since that's what's actually being used today by the vast majority of people (with the 4090 having just launched and few games having DLSS 3.0 support). For people who don't want to watch a 35-minute video, this article does a pretty good job of breaking it down:
The TL;DR is that DLSS 3.0 has a niche application but isn't as useful or transformative as DLSS 2.0. And, if you're solely counting frames, DLSS 3.0 can technically provide more frames, but with a significantly worse quality trade-off (so you wouldn't really want to use it anyway).
I would say to compare base FPS when looking at cards, though I still don't think that providing DLSS numbers is cheating, provided the chart is clear about it. Apparently Cyberpunk 2077 is getting a graphics options update that includes separate toggles for DLSS Frame Generation and DLSS Supersampling, so you can enjoy the "Low Res to High Res" benefits without any of the drawbacks of the frame generation aspects of DLSS 3.0 if you want. Hopefully options like that are the norm in the future as more games add DLSS 3.0 support.
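Here is a sketch of what having those separate toggles means in practice. This is a made-up settings dict, not Cyberpunk's actual config format, but it shows the two features being independent:

```python
# Hypothetical settings: upscaling and frame generation toggled independently.
settings = {
    "dlss_super_resolution": True,    # render low, upscale to the output resolution
    "dlss_frame_generation": False,   # do not insert AI-generated in-between frames
    "output_resolution": (2560, 1440),
    "internal_resolution": (1920, 1080),
}

if settings["dlss_super_resolution"]:
    print("Rendering at", settings["internal_resolution"],
          "and upscaling to", settings["output_resolution"])
if not settings["dlss_frame_generation"]:
    print("Every presented frame was actually rendered (no generated frames)")
```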
Don’t think this is limited to super-high frame rates, either. Microsoft Flight Simulator might only be running at 20-30 fps at times due to CPU bottlenecks. Doubling this is a significant advantage, especially since the game doesn’t require super low latency.
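The arithmetic in a CPU-bound case is pretty stark, since the generated frames never touch the CPU (simplified below, assuming one generated frame per rendered frame, which is how DLSS 3 is described):

```python
# Why frame generation helps in a CPU-bound title: the simulation stays at ~30 fps
# while the display is fed roughly twice that.
cpu_bound_fps = 30                  # how fast the CPU can feed the renderer
rendered_fps  = cpu_bound_fps       # the GPU can't outrun the CPU here
generated_per_rendered = 1          # one generated frame inserted per rendered frame

presented_fps = rendered_fps * (1 + generated_per_rendered)
print(f"Simulation/render rate: {rendered_fps} fps, presented rate: ~{presented_fps} fps")
```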
All graphics are fake; the question is just how well the faking is done. The information in a low-resolution, low-framerate sequence of images is plenty to reconstruct a higher-resolution version of it. Is the reconstructed data "real"? It's an irrelevant question, but in any case one could ask whether most of the information in a natively high-resolution image is uselessly redundant or not. If it can be reconstructed accurately enough, why bother creating it in the first place?
There’s a reason why cell phone images today often look better than DSLR images from not long ago. The sensors are still terrible in comparison, but the software has improved massively, and an image reconstructed from bad/noisy source data can very often look better than “good” but minimally-processed data.
I saw a claim on Reddit that someone got DLSS 3 working on a 2060. My understanding was that DLSS 3 relies on additional hardware on the card, and if that's the case, of course you can't just software-unlock it on an older card. Still, something to keep an eye out for.