this post was submitted on 05 Mar 2025

LocalLLaMA

Thinking about a new Mac; my MBP M1 2020 with 16 GB can only handle about 8B models, and it's slow.

Since I looked it up, I might as well share the LLM-related specs:
Memory bandwidth:
- M4 Pro (Mac Mini): 273 GB/s
- M4 Max (Mac Studio): 410 GB/s

CPU / GPU cores:
- M4 Pro: 14 / 20
- M4 Max: 16 / 40
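
As a rough sanity check on what bandwidth buys you: each generated token has to read roughly all of the active weights once, so memory bandwidth divided by model size gives an upper bound on tokens/s. A back-of-the-envelope sketch (the model footprints assume ~Q4 quantization and are illustrative guesses, not benchmarks):

```python
# Rough upper bound on generation speed: each generated token reads roughly
# all of the model's weights once, so tokens/s is capped by
# memory bandwidth / model size in RAM. Real-world numbers are lower.
def max_tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    return bandwidth_gb_s / model_size_gb

# Illustrative footprints (assumed ~Q4 quantization, not measured values)
models = {"8B @ Q4 (~5 GB)": 5, "32B @ Q4 (~20 GB)": 20, "70B @ Q4 (~40 GB)": 40}
machines = {"M4 Pro (273 GB/s)": 273, "M4 Max (410 GB/s)": 410}

for machine, bw in machines.items():
    for model, size in models.items():
        print(f"{machine} | {model}: <= {max_tokens_per_sec(bw, size):.0f} tok/s")
```

Actual throughput will be lower once prompt processing and compute limits kick in, but the ratio between the two machines should hold.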

Cores and memory bandwidth are of course important, but with the Mini I could get 64 GB of RAM instead of 36 (my budget is fixed for tax reasons).
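
For the capacity side of the trade-off, a quick fit check: a Q4-quantized model needs very roughly 0.5–0.6 GB per billion parameters, plus a few GB of headroom for the KV cache, macOS, and other apps. The constants here are assumptions for illustration, not measurements:

```python
# Very rough fit check: assume a Q4-quantized model takes ~0.55 GB per
# billion parameters, plus headroom for KV cache, macOS, and other apps.
# These constants are assumptions for illustration, not measurements.
def fits(params_billions: float, ram_gb: int, headroom_gb: float = 8.0) -> bool:
    return params_billions * 0.55 + headroom_gb <= ram_gb

for ram in (36, 64):
    for params in (8, 14, 32, 70):
        verdict = "fits" if fits(params, ram) else "too big"
        print(f"{params}B model on {ram} GB: {verdict}")
```

By that estimate, ~70B-class models at Q4 squeeze into 64 GB but not into 36 GB, which is the main practical difference between the two configurations.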

Feels like the Mini with more memory would be better. What do you think?

[–] Oskar@piefed.social 1 points 2 weeks ago (1 children)

Interesting, lots of "bang for the buck". I'll check it out.

[–] papertowels@mander.xyz 1 points 2 weeks ago

Yup! They even had a demo clustering 5 of them to run the full DeepSeek model.