Luckily, DeepSeek uses less power than most other locally run models. You can run it on almost any modern-ish machine.
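For anyone wondering what "running it locally" looks like in practice, here's a minimal sketch. It assumes a local Ollama install plus its Python client, with a distilled DeepSeek model already pulled; the model tag `deepseek-r1:7b` is just an illustrative example, not something from this thread:

```python
# Minimal sketch: chat with a locally hosted DeepSeek model via Ollama.
# Assumes `pip install ollama` and `ollama pull deepseek-r1:7b` were run first;
# the model tag is illustrative -- pick whichever distilled variant fits your hardware.
import ollama

response = ollama.chat(
    model="deepseek-r1:7b",  # small distilled variant for consumer machines
    messages=[{"role": "user", "content": "Why can local inference be power-efficient?"}],
)
print(response["message"]["content"])
```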
Meh, I like some of the others on Hugging Face a bit more for coding and such. But it's all much the same at the end of the day. I do like what you're saying, though!
Models plus moderate power should be what we strive for. I'm hoping for a Star Trek ending where we live in a post-scarcity world. I'm planning on a post-apocalypse, haha.
Once ASIC chips come out (essentially a specific model baked into a chip), the amount of power we use will drop dramatically.
ASIC AI seems like a troublesome thing, though. Imagine AI-powered hacker dongles. Wow.
It's an interesting field! I think the reason we haven't gone there yet is that today's LLMs all use very different architectures, formats, and so on, so the algorithms that create and run them need flexibility. GPUs are very flexible in what they can do with parallel processing (see the sketch after this comment).
But in five years' time (or less), I can see a black-box kind of system running at 1000x+ the speed that will make GPU-based LLMs obsolete. All the new GPU farms that are popping up will have a rude awakening, lol.
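To make the flexibility point above concrete, here's a rough sketch (PyTorch, purely illustrative and not from the thread): the same GPU happily runs two structurally different model blocks, which is exactly what a model-on-a-chip ASIC would give up.

```python
# Illustrative sketch: one GPU, two very different architectures.
# A model-on-a-chip ASIC would hard-wire one of these; a GPU runs both.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(1, 16, 128, device=device)  # (batch, seq or channels, features)

# Transformer-style block: self-attention over the sequence.
attn = torch.nn.MultiheadAttention(embed_dim=128, num_heads=4, batch_first=True).to(device)
y_attn, _ = attn(x, x, x)

# Convolutional block: same tensor, completely different computation.
conv = torch.nn.Conv1d(in_channels=16, out_channels=16, kernel_size=3, padding=1).to(device)
y_conv = conv(x)

print(y_attn.shape, y_conv.shape)  # both (1, 16, 128)
```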