submitted 8 months ago by gunpachi to c/localllama@sh.itjust.works

I have an RX 6600, 16 GB of RAM, and an i5-10400F.

I am using the oobabooga web UI, and I happen to have a GGUF file of LLama2-13B-Tiefighter.Q4_K_S.

But it always says that the connection errored out when I load the model.
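For what it's worth, a back-of-the-envelope check suggests a 13B Q4_K_S file plus its KV cache won't fully fit in the RX 6600's 8 GB of VRAM, so some layers have to stay in system RAM. A rough sketch (the ~4.5 bits/weight figure for Q4_K_S and an fp16 KV cache at 4096 context are assumptions, not exact numbers):

```python
# Back-of-the-envelope memory estimate for a quantized LLM.
# Assumptions (not exact): Q4_K_S ~ 4.5 bits per weight,
# Llama-2 13B dimensions (40 layers, hidden size 5120), fp16 KV cache.

def model_weights_gb(n_params_billions, bits_per_weight=4.5):
    """Approximate size of the quantized weights in GB."""
    return n_params_billions * 1e9 * bits_per_weight / 8 / 1e9

def kv_cache_gb(n_layers=40, n_ctx=4096, d_model=5120, bytes_per_elem=2):
    """Approximate fp16 KV cache size in GB (K and V per layer)."""
    return 2 * n_layers * n_ctx * d_model * bytes_per_elem / 1e9

weights = model_weights_gb(13)   # ~7.3 GB
kv = kv_cache_gb()               # ~3.4 GB
print(f"weights ~{weights:.1f} GB, KV cache ~{kv:.1f} GB, total ~{weights + kv:.1f} GB")
```

Since the total lands above 8 GB, loaders built on llama.cpp would need partial GPU offload (fewer GPU layers), with the remainder in the 16 GB of system RAM.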

Anyway, please suggest any good model that I can get started with.

[-] neurogenesis@lemmy.dbzer0.com 3 points 8 months ago

I'd suggest checking out WolframRavenwolf on Reddit, he does regular LLM tests.

I'm looking at Beyonder 4x7B, Mistral Instruct 2x7B, Laser Dolphin 2x7B, and previously used Una Cybertron.

[-] gunpachi 3 points 8 months ago

Hey, thanks! I'll check these out.

this post was submitted on 25 Jan 2024
22 points (100.0% liked)

LocalLLaMA


Community to discuss LLaMA, the large language model created by Meta AI.

This is intended to be a replacement for r/LocalLLaMA on Reddit.

founded 1 year ago