Because I think the post assumes the GPU is always using all of its resources during computation, when it isn't. There's a reason benchmarks can make a GPU run hotter than a game can, and not all games pin GPU utilization at 100%. If a GPU isn't pinned at 100%, the bottleneck is somewhere else in the presentation chain, which means unused resources on the GPU.
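As a rough illustration (my sketch, not from the post): on an NVIDIA card you can watch this yourself with the `pynvml` bindings while a game is running. If the reported utilization sits well below 100%, something other than the GPU is the limiting factor.

```python
# Rough sketch: sample GPU utilization while a game is running.
# Assumes an NVIDIA GPU and the nvidia-ml-py / pynvml package.
import time
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

try:
    for _ in range(30):  # sample once per second for 30 seconds
        util = pynvml.nvmlDeviceGetUtilizationRates(gpu)
        # util.gpu is the percentage of the last interval the GPU was busy;
        # consistently < 100% while a game runs means idle shader capacity.
        print(f"GPU busy: {util.gpu}%  memory controller busy: {util.memory}%")
        time.sleep(1)
finally:
    pynvml.nvmlShutdown()
```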
I still think it's a matter of waiting for the results to show up later. AMD's RDNA3 does have AI accelerators on it, and the gains it sees from FSR3 might be different, in the same way XeSS behaves differently depending on which hardware path its branching logic takes. It's too early to tell, given that all the test-suite results so far are on RDNA3 and that it doesn't officially launch until two weeks from now.
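To make the "branching logic" idea concrete, here is a purely hypothetical sketch of how an upscaler might pick a code path based on what the hardware offers; every name in it is invented for illustration and is not the actual FSR3 or XeSS API.

```python
# Hypothetical sketch of capability-based path selection, loosely modeled on
# how XeSS falls back from matrix-engine hardware to a generic path.
# All names here (has_matrix_engines, upscale_*) are made up for illustration.

def pick_upscale_path(device_caps: dict):
    """Return the fastest upscaling routine the device can run."""
    if device_caps.get("has_matrix_engines"):    # dedicated AI/matrix units
        return upscale_with_matrix_engines
    if device_caps.get("has_packed_int8_dot"):   # a DP4a-style fallback
        return upscale_with_int8_dot_products
    return upscale_generic_shader                # slowest, works anywhere

def upscale_with_matrix_engines(frame): ...
def upscale_with_int8_dot_products(frame): ...
def upscale_generic_shader(frame): ...
```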
You aren't going to use these features on extremely old GPUs anyway, and most newer GPUs will have spare shader compute capacity that can be used for this purpose.
Also, all performance is a matter of compromise. It is often better to render at a lower resolution with all of the rendering features turned on, then use upscaling and frame generation to get back to the target resolution and FPS, than to render natively at that resolution and FPS. That is frequently a better use of existing resources even when you don't have extra power to spare.
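Some back-of-the-envelope numbers (mine, for illustration) show why the compromise can pay off: rendering at 1440p instead of 4K shades well under half as many pixels, which leaves a large budget for the upscaling and frame-generation passes.

```python
# Back-of-the-envelope pixel counts: per-pixel shading work saved by
# rendering at 1440p and upscaling to 4K instead of rendering 4K natively.
native_4k = 3840 * 2160       # 8,294,400 pixels per frame
internal_1440p = 2560 * 1440  # 3,686,400 pixels per frame

ratio = internal_1440p / native_4k
print(f"1440p shades {ratio:.0%} of the pixels of native 4K")   # ~44%
print(f"~{1 - ratio:.0%} of the per-pixel budget is freed up")  # ~56%
# That freed budget can pay for the upscaler, frame generation, and
# heavier rendering features, which is the compromise described above.
```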