I hate that AMD copied the same terrible branding.
They're just trying to barely hang on to relevance, they're not interested in actually innovating.
AMD has shipped features in years past before Nvidia did; it's just that fewer people paid attention to them until they became hot topics after Nvidia implemented them.
An example was anti-lag, which AMD and Intel implemented before Nvidia:
https://www.pcgamesn.com/nvidia/geforce-driver-low-latency-integer-scaling
But people didn't care about it till ULL mode turned into Reflex.
AMD still holds onto Radeon Chill, which basically keeps the GPU running slower in-game when not a lot is happening on the screen. The end result is lower power consumption when AFK, as well as relatively lower fan speeds/better acoustics, because the GPU doesn't constantly work as hard.
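A rough sketch of the idea behind that kind of dynamic frame cap (illustrative only, not AMD's actual driver logic; the FPS bounds, idle timeout, and the `poll_input`/`render_frame` hooks are made up for the example):

```python
import time

# Chill-style dynamic frame cap: target a low frame rate when no input has
# arrived recently so the GPU can clock down, and a high one while the player
# is active. Illustrative sketch only, not AMD's driver code.
FPS_ACTIVE = 144    # hypothetical cap while the player is active
FPS_IDLE = 40       # hypothetical cap while nothing is happening
IDLE_AFTER_S = 2.0  # treat the player as idle after this long without input

def target_frame_time(last_input: float, now: float) -> float:
    idle = (now - last_input) > IDLE_AFTER_S
    return 1.0 / (FPS_IDLE if idle else FPS_ACTIVE)

def frame_loop(render_frame, poll_input):
    last_input = time.monotonic()
    while True:
        start = time.monotonic()
        if poll_input():          # any mouse/keyboard activity this frame?
            last_input = start
        render_frame()
        # Sleep out the rest of the frame budget so the GPU spends more time idle.
        budget = target_frame_time(last_input, start)
        elapsed = time.monotonic() - start
        if elapsed < budget:
            time.sleep(budget - elapsed)
```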
Yeah, if you're severely GPU-bottlenecked the difference is IMMEDIATELY OBVIOUS, especially in menus with custom cursors (mouse smoothness while navigating menus is a night-and-day difference). In-game it's barely noticeable until you start dropping to ~30 fps; then, again, a huge difference.
What makes the other options "theoretical"?
I'm not saying Reflex is bad or that esports pros don't use it. It's just that "theoretical" isn't the best choice of word here, because the other options do make a difference; it's just much harder to detect, similar to the latency difference between framerates that are close but not the same, or the experience of refresh rates that are close to each other. Especially at the high end, you stop being limited by framerate/input properties and become bottlenecked by screen characteristics (why OLEDs are better than traditional IPS, but can be beaten by high-refresh-rate IPS/TN with BFI).
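To put rough numbers on the "close framerates" part (the refresh rates below are just illustrative examples, not figures from anywhere in particular):

```python
# Frame-to-frame time at a few refresh rates, to show how small the gap
# between nearby framerates actually is. Illustrative numbers only.
for hz in (60, 120, 144, 165, 240):
    print(f"{hz:>3} Hz -> {1000 / hz:.2f} ms per refresh")
# 120 vs 144 Hz differs by ~1.4 ms per frame, 144 vs 165 Hz by ~0.9 ms,
# which is why the difference is hard to notice directly.
```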
Regardless, the point is less about the tech and more about the idea that AMD doesn't innovate. It does, but it takes longer for people to see it because they either choose not to use a specific feature or are completely unaware of it, either because they don't use AMD or because they have a fixed channel for where they get their news.
Let's not forget that over a decade ago, AMD's Mantle was what brought Vulkan/DX12-style performance to PC.
Because AMD's GPU division is a much smaller division in an overall larger company. They physically can't push out as many features because of that. When they decide to make a drastic change to their hardware, it's rarely noticed till it's considered old news. Take, for example, Maxwell and Pascal. You don't see a performance loss at the start because games are designed for the hardware of the time, in particular whatever's the most popular.
Maxwell and Pascal had a notable trait that allowed them to have lower power consumption: the lack of a hardware scheduler, as Nvidia moved scheduling onto the driver. This gave Nvidia more manual control of the GPU pipeline, letting their GPUs handle smaller pipelines better, compared to AMD, which had a hardware scheduler with multiple pipelines that an application needed to use properly to maximize performance. It led to Maxwell/Pascal cards having better performance... till it didn't, as devs started to thread games better, and what used to be a good change for power consumption evolved into a CPU overhead problem (something Nvidia still has to this day relative to AMD). AMD's innovations tend to be more on the hardware side of things, which makes them pretty hard to market.
It was like AMD's marketing for Smart Access Memory (again, a feature AMD got to first, and to this day it works slightly better on AMD systems than on others). It was hard to market because there isn't much of a wow factor to it, but it is an innovation.
Which then brings up the question of price/perf. It's not that DLSS being better than FSR is wrong, but when you factor in price, some price tiers start to get funny, especially at the low end.
For the LONGEST time, the RX 6600, which by default was about 15% faster than the 3050 and significantly cheaper, was still outsold by the 3050. Using DLSS to cover performance that another GPU delivers natively (meaning objectively better: no artifacts, no added latency) is where the argument of never buying a GPU without DLSS becomes weak, as the issue for some price brackets is that what you can get at the same or similar price might be significantly better.
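As a rough illustration of that math (the ~15% gap is the figure above; the dollar prices here are made-up placeholders, not actual street prices at the time):

```python
# Simple performance-per-dollar comparison. The ~15% performance gap comes
# from the comment above; the prices are hypothetical placeholders.
def perf_per_dollar(relative_perf: float, price: float) -> float:
    return relative_perf / price

rtx_3050 = perf_per_dollar(1.00, 250)  # baseline perf at an assumed $250
rx_6600  = perf_per_dollar(1.15, 200)  # ~15% faster at an assumed $200

print(f"RTX 3050: {rtx_3050:.4f} perf/$")
print(f"RX 6600:  {rx_6600:.4f} perf/$")
print(f"RX 6600 advantage: {rx_6600 / rtx_3050 - 1:.0%}")  # ~44% more perf per dollar
```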
In terms of modern GPUs, the 4060 Ti is the one card that, for the most part, everyone should avoid (unless you're a business in China that needs GPUs for AI due to the U.S. government limiting chip sales).
Sort of the same idea with RT performance too. Some people make it sound like AMD can't do RT at all. Usually its performance is a generation behind, so in situations like the 7900 XTX vs the 4080, value could swing towards the 4080; but in situations like the 7900 XT, which was at one point selling for $700, its value, RT included, was significantly better than the 4070 Ti as an overall package.
Which is what I'm saying, on the condition of course that the GPUs are priced close enough (e.g. 4060 vs 7600). But when there's a deficiency in a card's spec (e.g. 8 GB GPUs) or a large discrepancy in price, it would usually favor the AMD card.
It's why the 3050 was a terribly priced GPU for the longest time, and currently the 4060 Ti is the butt of the joke; nobody should pick those over the AMD card in that price range, due to both performance and hardware deficiency (VRAM, in the case of the cheaper 4060 Ti).