[–] theunknownmuncher@lemmy.world 18 points 2 days ago* (last edited 2 days ago) (7 children)

> So firstly, shut up, nerd. You’ve clearly never worked a corporate job and don’t understand the compulsion to in-house nothing.

I WISH. It would have saved so much headache from rolling our own framework for the millionth time instead of using a standard solution that already exists... Tell me you've never worked in corporate software without telling me you've never worked in corporate software.

> Secondly, approximately 0.0% of chatbot users will ever run an LLM at home.

Wut. Individual models on Hugging Face have monthly downloads in the hundreds of thousands... DeepSeek R1 alone was downloaded 550,000 times in the last month. Gemma 3n was downloaded over 120,000 times in the last 7 days.
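(For anyone wondering what "downloading a model" actually involves, here's a minimal sketch using the `huggingface_hub` Python library; the repo id is a placeholder, not a specific recommendation.)

```python
# Minimal sketch: pull a model's files from the Hugging Face Hub for local use.
# The repo_id is a placeholder -- substitute e.g. a DeepSeek R1 or Gemma 3n repo.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="your-org/your-model")
print(f"Model files are now in: {local_dir}")
```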

> Thirdly, it’s slow as hell even on a top-end home Nvidia card — and the data centre cards are expensive.

They literally run well on mobile phones and laptops lol.
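(Rough sketch of what "runs on a laptop" looks like with `llama-cpp-python`, assuming you already have a small quantized GGUF file; the filename below is a placeholder.)

```python
# Sketch: run a small quantized model entirely on a laptop CPU -- no GPU needed.
from llama_cpp import Llama

llm = Llama(
    model_path="./small-model-Q4_K_M.gguf",  # placeholder for any small quantized model
    n_ctx=2048,       # context window
    n_gpu_layers=0,   # 0 = keep everything on the CPU
)

out = llm("Explain what a local LLM is in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```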

The author is desperately grasping at straws at best, intentionally making up false claims at worst... either way, they aren't qualified to write on this subject.

[–] Blaster_M@lemmy.world 4 points 2 days ago (1 children)

Local LLMs run pretty well on a 12GB RTX 2060. They're pretty cheap, if a bit rare now.

[–] Valmond@lemmy.world -1 points 2 days ago (1 children)

So 12GB is what you need?

Asking because my 4GB card clearly doesn't cut it 🙍🏼‍♀️

[–] Blaster_M@lemmy.world 0 points 2 days ago (1 children)

A 4GB card can run smol models; bigger ones require an nvidia card and lots of system RAM, and performance gets proportionally worse the more of the model spills out of VRAM into system RAM.
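(To make that balance concrete, here's a hedged sketch with `llama-cpp-python`: you offload only as many layers as fit in VRAM and the rest run from system RAM, which is where the slowdown comes from. The model path and layer count are placeholders to tune for your card.)

```python
# Sketch: split a model between a 4GB GPU and system RAM.
# n_gpu_layers sets how many transformer layers live in VRAM;
# the remaining layers stay in system RAM and run on the CPU, which is slower.
from llama_cpp import Llama

llm = Llama(
    model_path="./some-7b-model-Q4_K_M.gguf",  # placeholder quantized model (~4 GB)
    n_gpu_layers=16,  # offload only what fits in 4 GB of VRAM; tune per card
    n_ctx=4096,
)

out = llm("Why is partial offloading slower than full-GPU inference?", max_tokens=48)
print(out["choices"][0]["text"])
```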

[–] theunknownmuncher@lemmy.world 3 points 2 days ago

> require an nvidia

Big models work great on MacBooks, AMD GPUs, or AMD APUs with unified memory.
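(Same library, unified-memory case, as a sketch: because the GPU and CPU share one memory pool on Apple Silicon or AMD APUs, every layer can be offloaded even for large models. The model path is again a placeholder.)

```python
# Sketch: on unified-memory hardware, GPU and CPU share one memory pool,
# so all layers can be offloaded even for large models.
from llama_cpp import Llama

llm = Llama(
    model_path="./big-model-Q4_K_M.gguf",  # placeholder large quantized model
    n_gpu_layers=-1,  # -1 = offload every layer
)
```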
