They provide specialized hardware for neural network inference (most commonly convolutional networks), meaning a bunch of matrix multiplication capabilities and other operations required to execute a neural network.
Look at this page for more info: https://www.nvidia.com/en-us/data-center/tensor-cores/
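For a sense of what that means in practice, here is a minimal PyTorch sketch (my own example, not from the linked page) of the kind of half-precision matrix multiplication workload tensor cores are built to accelerate:

```python
# Illustration only: tensor cores accelerate exactly this kind of
# half-precision matrix multiplication (assumes PyTorch and a CUDA GPU).
import torch

a = torch.randn(4096, 4096, dtype=torch.float16, device="cuda")
b = torch.randn(4096, 4096, dtype=torch.float16, device="cuda")

# On hardware with tensor cores, the FP16 matmul is routed through them
# automatically by the underlying libraries; otherwise it runs on the
# regular CUDA cores.
c = a @ b
```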
They can be leveraged for generative AI needs, and I bet that's how Nvidia provides automatic upscaling - it's not the game that does it, it's literally the graphics card that does it. Leveraging these cores for in-game AI (like using them to generate text the way ChatGPT does) is another matter - you want a game that works on all platforms, even those that don't have such cores. Having code that says "if it has such cores, execute that code on them, otherwise execute it on the CPU" is possible, but imo that is more the domain of the computational libraries or the game engine - not the game developer (unless that developer writes their own engine).
But my point is that it's not as simple as "just have each core implement an AI for my game". These cores are just accelerators for the matrix multiplication operations that generative AI itself relies on. They need to be exposed through the game dev software ecosystem before a game developer can actually use those features.
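As a rough sketch of the "run on the accelerator if present, otherwise on the CPU" dispatch described above (assuming PyTorch as the computational library; the function names are just for illustration):

```python
import torch

def pick_device() -> torch.device:
    # Prefer the GPU when one is available; fall back to the CPU otherwise.
    if torch.cuda.is_available():
        return torch.device("cuda")
    return torch.device("cpu")

def run_inference(model: torch.nn.Module, features: torch.Tensor) -> torch.Tensor:
    # The same model code runs on either device; only the placement changes.
    device = pick_device()
    model = model.to(device)
    with torch.no_grad():
        return model(features.to(device)).cpu()
```

This is exactly the kind of branching that is easier to bury inside a library or engine than to reimplement in every game.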
By AI here I mean what is traditionally meant by "game AI": pathfinding, decision-making, coordination, etc. There is a Counter-Strike bot that uses neural nets (on the CPU), and it's been around for decades now. It is trained much like normal bots are. You can train an AI in a game and then use it for NPCs, enemies, etc.
We should use the AI cores to do AI.
You could imagine training one AI for each game AI problem, like pathfinding, but what's the benefit over just using classical algorithms?
Can DLSS and XeSS be used for anything other than upscaling?
Utilisation. A CPU isn't really built for deep AI code, so it can't do realistic AI when the frame budget also has to cover everything else. This is famously why games have bad AI. Training NPC behaviour with machine learning could make NPCs more realistic or smarter, and offloading it would keep it within a reasonable frame budget.
I see. You want to offload AI-specific computations to the Nvidia AI cores. Not a bad idea, although it does mean that hardware without them will carry more CPU load, so the AI might have to be tuned down based on the hardware it runs on.
Yes. Similar to ray tracing, which still needs a traditional pipeline as a fallback, with AI you would have an "enhanced" path (neural nets) and a "basic" path (if statements).
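A toy sketch of that split, in the same spirit: a hypothetical NPC controller that runs a small policy network when a CUDA GPU is present and falls back to plain if statements otherwise. Everything here (NPCBrain, choose_action, the feature layout) is made up for illustration, not any engine's real API.

```python
import torch

ACTIONS = ["patrol", "chase", "flee"]

class NPCBrain(torch.nn.Module):
    """Tiny policy network: game-state features in, action scores out."""
    def __init__(self, n_features: int = 4, n_actions: int = len(ACTIONS)):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(n_features, 32),
            torch.nn.ReLU(),
            torch.nn.Linear(32, n_actions),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def choose_action(features: torch.Tensor, brain: NPCBrain | None) -> str:
    if brain is not None and torch.cuda.is_available():
        # "Enhanced" path: run the policy network on the GPU.
        with torch.no_grad():
            scores = brain.to("cuda")(features.to("cuda"))
        return ACTIONS[int(scores.argmax())]
    # "Basic" path: classical if-statement logic on the CPU.
    distance_to_player, health, _, _ = features.tolist()
    if health < 0.25:
        return "flee"
    if distance_to_player < 10.0:
        return "chase"
    return "patrol"
```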