this post was submitted on 13 Sep 2025

Buildapc

Hey all. I'm exploring the idea of building a desktop PC optimized for running LLMs locally. My two primary use cases are adding local documents so I can talk to my files, and using it as a coding assistant. A lower-priority use case I'm tangentially interested in is image generation with Stable Diffusion. I don't plan to do any model training; I'll leave that to the pros.

One of the decisions I'm currently working through is whether to build this as a desktop workstation (like a PC build) or as more of a homelab environment (like a "local cloud"). On one hand, I believe a desktop workstation would be easier for me to wrap my head around because I've built several gaming PCs, whereas I have no homelab or self-hosting experience beyond running a local-only Jellyfin instance on an old laptop. On the other hand, I like the thought of a separate, atomic AI hub, a local cloud if you will, similar to how I think of a NAS as a separate thing. What I like about the local-cloud framing is that, as with a NAS, the AI hub can be accessed from any device.

I would like to strike the right balance between budget, power efficiency, and speed. I don't need to set any land speed records, but I would also like to avoid waiting several minutes for responses. I can probably spend up to $2,000 on this project, and I'm located in the US.

My questions for those in the community who've gone before me:

  • Has anyone built a desktop workstation and then wished they'd built it as a server?
  • Is there actually much of a difference between a desktop workstation versus a homelab environment when it comes to hardware for AI tasks?
  • What other questions should I be asking myself to decide which way to go?

Thanks!

top 5 comments
[–] mierdabird@lemmy.dbzer0.com 3 points 1 day ago

I initially installed Ollama/OpenWebUI on my HP G4 Mini, but it's got no GPU obviously, so with 16GB RAM I could run 7b models but only at 2 or 3 tokens/sec.
It definitely made me regret not buying a bigger case that could accommodate a GPU, but I ended up installing the same Ollama/OpenWebUI pair on my Windows desktop with a 3060 12GB and it runs great - 14b models at 15+ tokens/sec.
Even better, I figured out that my reverse proxy on the server can redirect to other addresses on my network, so now I just have a dedicated subdomain URL for my desktop instance. Its OpenWebUI is now just as accessible remotely as my server's.
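For anyone curious what that subdomain redirect might look like, here's a rough sketch as an nginx server block. This is my own illustration, not the commenter's actual config: the domain, LAN IP, and port are all placeholders, and I've assumed OpenWebUI is exposed on port 3000.

```nginx
# Hypothetical sketch: forward ai.example.com to a desktop's OpenWebUI
# instance on the LAN. Domain, IP, and port are placeholders.
server {
    listen 443 ssl;
    server_name ai.example.com;

    location / {
        proxy_pass http://192.168.1.50:3000;   # desktop running OpenWebUI
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # WebSocket upgrade headers, since the chat UI streams responses
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

With something like this in place, any device that can reach the server's reverse proxy can reach the desktop's OpenWebUI too.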

[–] Aelyra@lemmy.ml 4 points 1 day ago (1 children)

I’m probably stating the obvious, but you can totally access your desktop PC remotely. Tools like SSH, TeamViewer, Moonlight, and others work pretty well for that. The biggest factor by far is whether you plan to game on it too. If so, a gaming PC usually makes way more sense than server hardware.

Otherwise, it really just comes down to raw specs and what's actually available to you locally. VRAM is the most important factor. You'll want plenty of it in your machine. Fast DDR5 RAM is nice to have too.
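To put a rough number on "plenty of VRAM": my own back-of-envelope rule (not from this thread) is that a quantized model needs about parameter-count × bits-per-weight ÷ 8 gigabytes, plus headroom for the KV cache and context.

```shell
# Rough VRAM estimate for a quantized model (back-of-envelope only).
# 14B parameters at ~4.5 bits/weight (a typical Q4-class quant):
awk 'BEGIN { printf "%.1f GB\n", 14 * 4.5 / 8 }'
# Prints ~7.9 GB for the weights alone; leave a few extra GB for
# KV cache and context, so a 12 GB card is a snug fit for 14b models.
```

That lines up with the earlier report of 14b models running well on a 3060 12GB.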

My personal take? When in doubt, go for the desktop PC. The setup’s easier, upgrades are simpler, and you’ll often find better deals especially if you’re buying used.

[–] yo_scottie_oh@lemmy.ml 2 points 1 day ago* (last edited 1 day ago)

Okay, thanks for chiming in. Because I also game on PC, I think I'll scrap the AI server idea and stick with a desktop workstation that will do both gaming and AI tasks.

ETA: good point about being able to remote into the desktop workstation from other devices.

[–] BananaTrifleViolin@lemmy.world 1 point 1 day ago (1 children)

I don't have experience in this in terms of setting up a server vs desktop, but I have played with AI on a desktop and in a VM.

I'd suggest trying the tech in both a desktop and a VM to see which works for you. You could set up a low-powered version of your server idea to see if it works in principle and whether it gives you what you want.
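One cheap way to prototype the "server" flavor (my suggestion, not the commenter's) is to run an Ollama + OpenWebUI pair as containers. A minimal docker-compose sketch might look like this; the image names are the public ones, but the port mapping and volume names are just illustrative, and as written it's CPU-only (GPU passthrough needs extra runtime config):

```yaml
# Minimal sketch of an Ollama + OpenWebUI stack for prototyping.
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ollama:/root/.ollama        # persists downloaded models
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"                 # browse to http://<host>:3000
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    depends_on:
      - ollama
volumes:
  ollama:
```

Point any device on your LAN at port 3000 and you've effectively got the "local cloud" version to evaluate, before committing to dedicated hardware.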

I do have a non-AI Raspberry Pi setup, and there is a lot of benefit to an always-on, always-accessible device, so I can see the attraction. But for AI, are you going to use it enough to justify such a setup? Testing it with a VM might help answer that, at least in part.

I've actually gotten a second graphics card recently to give a VM its own hardware - I'm just tinkering and won't depend on it, but a KVM virtual machine on a desktop is a viable way to do this stuff too.

With a desktop you do have flexibility if you want to use it as an actual desktop, but equally, with an always-on server stack you have a lot of options beyond AI: for example, a media server, a Nextcloud setup, a Syncthing mirror, etc.

If you have $2000 to burn, I guess either will work. If you don't, I'd probably pragmatically go down the desktop route myself, so I can make other uses of the kit or have flexibility with what I do with it (including selling it).

[–] yo_scottie_oh@lemmy.ml 1 point 1 day ago* (last edited 1 day ago)

Gotcha, so for AI and gaming (dual purpose), you're suggesting I may as well incorporate both into one rig, right?