this post was submitted on 21 Aug 2025
4 points (75.0% liked)

LocalLLaMA

Welcome to LocalLLaMA! Here we discuss running and developing machine learning models at home. Let's explore cutting-edge open-source neural network technology together.

Get support from the community! Ask questions, share prompts, discuss benchmarks, and get hyped about the latest and greatest model releases! Enjoy talking about our awesome hobby.

As ambassadors of the self-hosting machine learning community, we strive to support each other and share our enthusiasm in a positive, constructive way.

Rules:

Rule 1 - No harassment or personal character attacks on community members. I.e., no name-calling, no generalizing about entire groups of people that make up our community, no baseless personal insults.

Rule 2 - No comparing artificial intelligence/machine learning models to cryptocurrency. I.e., no comparing the usefulness of models to that of NFTs, no claiming the resource usage required to train a model is anything close to that of maintaining a blockchain or mining crypto, no implying it's just a fad/bubble that will leave people with nothing of value when it bursts.

Rule 3 - No comparing artificial intelligence/machine learning to simple text prediction algorithms. I.e., no statements such as "LLMs are basically just simple text prediction like what your phone keyboard's autocorrect uses, and they're still using the same algorithms since <over 10 years ago>."

Rule 4 - No implying that models are devoid of purpose or potential for enriching people's lives.

Yes, this is a recipe for extremely slow inference: I'm running a 2013 Mac Pro with 128 GB of RAM. I'm not optimizing for speed; I'm optimizing for aesthetics and intelligence :)

Anyway, what model would you recommend? I'm looking for something general-purpose but with solid programming skills. Ideally abliterated as well; since I'm running this locally, I might as well have all the freedoms. Thanks for the tips!

[–] fredofredo@lemmy.world 2 points 2 hours ago (1 children)

What are you using it for? Coding? How big a context do you need?

[–] trave@lemmy.sdf.org 1 points 2 hours ago (2 children)

Some coding, yeah, but I also want one that's just a good "general purpose" chat model.

Not sure how much context... from what I've heard, models kind of break down at super large contexts anyway? Though I'd love to have as large a functional context as possible. I guess it's somewhat of a tradeoff in RAM usage, since the whole context gets loaded into memory?
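Rough math on why context eats RAM: each token in the context keeps key and value vectors for every layer (the KV cache), so memory grows linearly with context length. A quick Python sketch; the layer/head numbers here are made-up placeholders for a mid-size model, not any particular one's config:

```python
# Rough KV-cache sizing: each token kept in context stores a key and a value
# vector in every layer, so cache memory grows linearly with context length.

def kv_cache_bytes(context_len, n_layers, n_kv_heads, head_dim, bytes_per_elem=2):
    # 2x for keys + values; an fp16/bf16 cache uses 2 bytes per element
    return 2 * context_len * n_layers * n_kv_heads * head_dim * bytes_per_elem

# Illustrative placeholder dims (48 layers, 8 KV heads of dim 128) --
# not any real model's config.
for ctx in (8_192, 32_768, 131_072):
    gib = kv_cache_bytes(ctx, n_layers=48, n_kv_heads=8, head_dim=128) / 2**30
    print(f"{ctx:>7} tokens -> ~{gib:.1f} GiB KV cache")
```

With those placeholder dims, going from 8K to 128K context grows the cache from about 1.5 GiB to about 24 GiB on top of the weights, so context length really is a RAM tradeoff.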

[–] Womble@piefed.world 1 points 45 minutes ago

If you really don't care about speed (as in, ask a question and come back half an hour later), you could try a 3-bit quantization of Qwen3 thinking. That's around 100 GB, so you could fit it in memory and still have enough left over for the OS. But I'm not kidding about coming back an hour later for your response (or even longer); that's a very big model for a decade-old computer.
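Quick back-of-the-envelope on how that fits in 128 GB. I'm assuming the big Qwen3 thinking model is roughly 235B parameters (my guess); real GGUF quants mix bit-widths and add overhead, which is why actual files land closer to 100 GB than the raw-weight figure below:

```python
# Sanity-check the "~100 GB for a 3-bit quant" claim, assuming a ~235B-parameter
# model (an assumption on my part, not a confirmed figure).

def quant_weights_gib(n_params, bits_per_weight):
    # raw weight storage only: params * bits, converted to GiB
    return n_params * bits_per_weight / 8 / 2**30

weights = quant_weights_gib(235e9, 3.0)
print(f"~{weights:.0f} GiB of raw weights")                # ~82 GiB
print(f"~{128 - weights:.0f} GiB headroom for OS, KV cache, and buffers")
```

So even after real-world quant overhead pushes that toward 100 GB, a 128 GB machine still has room for the OS and a modest context.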

[–] mierdabird@lemmy.dbzer0.com 1 points 1 hour ago

Qwen3 Coder is the current top dog for coding AFAIK. There's a 30B size and something bigger, but I can't remember what because I have no hope of running it lol. But I think the larger models have up to a million-token context window.