this post was submitted on 27 Jan 2024
LocalLLaMA
you are viewing a single comment's thread
To do general-purpose GPU compute on AMD hardware, you need a GPU that is supported by ROCm (AMD's equivalent of CUDA). Most gaming GPUs are not.
There is a list here, but be aware that it is for the latest ROCm version; some tools might still use older versions with different supported devices.
https://rocm.docs.amd.com/projects/install-on-linux/en/latest/reference/system-requirements.html#supported-gpus
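If you're not sure which internal chip name (gfx target) your card reports, you can ask ROCm directly. A minimal sketch, assuming the rocminfo tool from the ROCm stack is installed:

```python
import subprocess

# List the gfx targets rocminfo reports, e.g. "gfx1030" for an RX6800,
# then compare against AMD's support list.
out = subprocess.run(["rocminfo"], capture_output=True, text=True).stdout
targets = sorted({tok for line in out.splitlines()
                  for tok in line.split() if tok.startswith("gfx")})
print(targets or "no ROCm-visible GPU found")
```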
Has that changed recently? I've run ROCm successfully on an RX6800. I seem to recall the card was supported; the host OS (Arch) was not.
When I tried it, maybe a year or so ago, there were four supported chipsets in that version of ROCm (5.4.2, I think), but I don't remember which card models those were, since they were only specified by their internal chip names. Mine (a 5700XT) wasn't supported at the time.
No, GFX1030 is still supported.
This link is misleading. For example, the Radeon RX6800 IS supported, because it is the same chip (GFX1030) as one of the Radeon Pro cards. Many others are too… though support does not go very far back.
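For cards that are close to a supported chip but not identical, a commonly used (and unofficial) workaround is the HSA_OVERRIDE_GFX_VERSION environment variable. A sketch; the binary name, model path, and layer count are illustrative placeholders:

```python
import os
import subprocess

# Unofficial workaround: have a near-miss RDNA2 card (e.g. gfx1031/gfx1032)
# present itself as the officially supported gfx1030. No guarantees.
env = dict(os.environ, HSA_OVERRIDE_GFX_VERSION="10.3.0")

# Hypothetical invocation of a ROCm-built llama.cpp binary.
subprocess.run(["./main", "-m", "model.gguf", "-ngl", "33"], env=env)
```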
Llama.cpp supports OpenCL as well, and in my limited experience it performs better than ROCm. That should work on basically any GPU.
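As of early 2024, llama.cpp's OpenCL path went through CLBlast. A minimal sketch using the llama-cpp-python bindings; the install command, model path, and layer count are assumptions, not something from this thread:

```python
# Assumes the bindings were built with the CLBlast backend, e.g.:
#   CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
from llama_cpp import Llama

# n_gpu_layers controls how many transformer layers are offloaded;
# "model.gguf" is a placeholder path.
llm = Llama(model_path="model.gguf", n_gpu_layers=35)
out = llm("Q: What does ROCm stand for? A:", max_tokens=32)
print(out["choices"][0]["text"])
```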