this post was submitted on 05 Mar 2025
10 points (91.7% liked)
LocalLLaMA
Welcome to LocalLLaMA! This is a community to discuss local large language models such as LLaMA, DeepSeek, Mistral, and Qwen.
Get support from the community! Ask questions, share prompts, discuss benchmarks, get hyped about the latest and greatest model releases! Enjoy talking about our awesome hobby.
As ambassadors of the self-hosting machine learning community, we strive to support each other and share our enthusiasm in a positive, constructive way.
you are viewing a single comment's thread
Depends on what model you want to run?
Of course, I haven't looked at models >9B so far. So I have to decide whether I want to run larger models quickly, or even larger models quickly-but-not-as-quickly-as-on-a-Mac-Studio.
Or I could just spend the money on API credits :D
Use API credits. 64GB can barely run a 70B model. I have a MacBook Pro M3 Max with 128GB and can run those and even slightly bigger models, but the results are underwhelming. I didn't buy it only for LLMs, but if I had, I would be disappointed.
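For context, here's a rough back-of-envelope estimate of why 64GB is tight for a 70B model. This is a sketch of my own, not from the thread; the bits-per-weight figures are typical values for common GGUF quantizations (an assumption on my part), and real usage adds KV cache and OS overhead on top of the weights.

```python
# Rough memory estimate for local LLM weights at different quantizations.
# Bits-per-weight values below are approximate figures for common GGUF
# quants (assumption); actual files vary slightly by model and quant mix.

GB = 1024**3

def weight_memory_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate memory needed for the model weights alone."""
    return params_billion * 1e9 * bits_per_weight / 8 / GB

for quant, bpw in [("Q8_0", 8.5), ("Q4_K_M", 4.8), ("Q2_K", 2.6)]:
    print(f"70B @ {quant}: ~{weight_memory_gb(70, bpw):.0f} GB for weights")

# 70B @ Q8_0:   ~69 GB -> doesn't fit in 64GB at all
# 70B @ Q4_K_M: ~39 GB -> fits, but context + OS leave little headroom
# 70B @ Q2_K:   ~21 GB -> fits easily, at a noticeable quality cost
```

On a Mac the weights also share unified memory with the OS, and the KV cache grows with context length, which is why 64GB only "barely" runs a 70B model in practice.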