submitted 7 months ago by ylai@lemmy.ml to c/localllama@sh.itjust.works
[-] fhein@lemmy.world 1 points 6 months ago

GGUF q2_K works quite well IMO; I've run it with 12 GB VRAM + 32 GB RAM
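(A rough sketch of the split the comment describes: with llama.cpp you offload some layers to the GPU and keep the rest in system RAM. All sizes and the formula below are illustrative assumptions, not measurements from the comment.)

```python
# Back-of-the-envelope estimate (all numbers are assumptions): how many layers
# of a q2_K-quantized GGUF model fit on the GPU, with the rest left in system
# RAM -- roughly what llama.cpp's -ngl / n_gpu_layers setting controls.

def split_layers(model_gb: float, n_layers: int, vram_gb: float,
                 overhead_gb: float = 2.0) -> tuple[int, float]:
    """Return (layers offloaded to GPU, GB remaining in system RAM)."""
    per_layer_gb = model_gb / n_layers
    # Reserve some VRAM headroom for the KV cache and compute buffers.
    usable_vram = max(vram_gb - overhead_gb, 0.0)
    gpu_layers = min(int(usable_vram // per_layer_gb), n_layers)
    cpu_gb = (n_layers - gpu_layers) * per_layer_gb
    return gpu_layers, cpu_gb

# e.g. a hypothetical ~15 GB q2_K model with 32 layers on a 12 GB card:
gpu, cpu = split_layers(model_gb=15.0, n_layers=32, vram_gb=12.0)
print(gpu, round(cpu, 2))  # → 21 5.16
```

So on a setup like the one above, most layers land in VRAM and only a few GB spill into the 32 GB of system RAM, which is why a 12 GB card can handle a model that doesn't fully fit.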

this post was submitted on 01 Feb 2024
40 points (95.5% liked)

LocalLLaMA


Community to discuss LLaMA, the large language model created by Meta AI.

This is intended to be a replacement for r/LocalLLaMA on Reddit.

founded 1 year ago