this post was submitted on 17 Apr 2025
25 points (90.3% liked)

LocalLLaMA

2884 readers

Welcome to LocalLLaMA! Here we discuss running and developing machine learning models at home. Let's explore cutting-edge open source neural network technology together.

Get support from the community! Ask questions, share prompts, discuss benchmarks, and get hyped about the latest and greatest model releases! Enjoy talking about our awesome hobby.

As ambassadors of the self-hosting machine learning community, we strive to support each other and share our enthusiasm in a positive, constructive way.

founded 2 years ago
[–] Smokeydope@lemmy.world 2 points 4 days ago* (last edited 4 days ago)

You are correct in your understanding. However, the last part of your comment needs a big asterisk: it's important to consider quantization.

The full f16 DeepSeek R1 GGUF from Unsloth requires 1.34 TB of RAM. Good luck getting the RAM sticks and channels for that.
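That 1.34 TB figure lines up with simple back-of-envelope math: R1 is roughly 671B total parameters (the published count, not something exact) and f16 stores two bytes per weight. A quick sketch:

```python
# Rough f16 memory estimate for DeepSeek R1 (assumes ~671B total parameters)
params = 671e9           # published total parameter count, MoE experts included
bytes_per_weight = 2     # f16/bf16 = 16 bits = 2 bytes
print(f"~{params * bytes_per_weight / 1e12:.2f} TB")  # ~1.34 TB, before KV cache/context
```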

The Q4_K_M mid-range quant is 404 GB, which would theoretically fit inside 512 GB of RAM with leftover room for context.

512 GB of RAM is still a lot; theoretically you could run a lower quant of R1 with 256 GB of RAM. Not super desirable, but totally doable.
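If you want to play with the numbers yourself, here is a rough sketch of quantized file size versus RAM budget. The bits-per-weight values are approximate averages for common llama.cpp quant types, and the 671B parameter count is assumed, so treat the output as ballpark figures rather than exact GGUF sizes:

```python
# Back-of-envelope GGUF size vs. RAM budget (assumes ~671B total parameters)
PARAMS = 671e9

def approx_size_gb(bits_per_weight: float) -> float:
    """Approximate quantized model size in decimal GB."""
    return PARAMS * bits_per_weight / 8 / 1e9

# Approximate average bits per weight for some llama.cpp quant types
quants = {"Q4_K_M": 4.85, "Q3_K_M": 3.91, "Q2_K": 2.63}

for name, bpw in quants.items():
    size = approx_size_gb(bpw)
    for ram_gb in (256, 512):
        # leave ~10% of RAM free for context, KV cache, and the OS
        verdict = "fits" if size < ram_gb * 0.9 else "does not fit"
        print(f"{name} (~{bpw} bpw): ~{size:.0f} GB -> {verdict} in {ram_gb} GB RAM")
```

Running that reproduces the rough picture above: ~4.8 bpw lands around the 404 GB mark (512 GB territory), while something near 2.5 to 3 bpw is what you would need to squeeze R1 into 256 GB.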