this post was submitted on 20 Aug 2025
320 points (99.1% liked)
Microblog Memes
A place to share screenshots of microblog posts, whether from Mastodon, Tumblr, ~~Twitter~~ X, KBin, Threads, or elsewhere.
Created as an evolution of White People Twitter and other tweet-capture subreddits.
Rules:
- Please put at least one word relevant to the post in the post title.
- Be nice.
- No advertising, brand promotion, or guerrilla marketing.
- Posters are encouraged to link to the original toot, tweet, etc. in the post description.
Jesus Christ 🤦‍♂️
MS puts out an "LLMs suck at this" and y'all lose your minds.
i mean, if it sucks at this, why put it in lol
(rhetorical question, it's to please investors, i know)
One of the absolute best uses for LLMs is generating quick summaries of massive amounts of data. It's pretty much the only use case where, as long as the model doesn't overflow its context and immediately turn incoherent [1], it is extremely useful.
But nooooo, this is luddite.ml, where saying anything good about AI gets you burnt at the stake
Some of y'all would've lit the fire under Jan Hus if you lived in the 15th century
[1] This is mostly a concern for local models with smaller parameter counts running quantized; for premier models it's not really an issue.
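To make the summarization point concrete, here's roughly what "summarize massive data without overflowing" looks like in code. This is a minimal map-reduce sketch in Python; the OpenAI-compatible local endpoint, the Ollama-style URL, and the "llama3" model name are all placeholder assumptions, not anything from the original comments:

```python
# Minimal map-reduce summarization sketch (Python, openai>=1.0 client).
# Assumptions: an OpenAI-compatible server at localhost:11434/v1 (e.g. Ollama)
# serving a placeholder model called "llama3"; the chunk size is illustrative.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")
MODEL = "llama3"  # placeholder local model name

def ask(text: str, instruction: str) -> str:
    # One chat-completion call: the system prompt carries the task,
    # the user prompt carries the data.
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": instruction},
            {"role": "user", "content": text},
        ],
    )
    return resp.choices[0].message.content

def summarize_large(text: str, chunk_chars: int = 8000) -> str:
    # Map step: split the input so no single call overflows the context window.
    chunks = [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]
    partials = [ask(c, "Summarize the following text in a few sentences.") for c in chunks]
    # Reduce step: fold the partial summaries into one coherent summary.
    return ask("\n\n".join(partials),
               "Combine these partial summaries into one coherent summary.")
```

The chunk-then-reduce structure is also why the footnote matters: small quantized local models are the ones most likely to degrade as those calls fill up their context.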
Because it's good at other things, like creating tables and fully using features that users typically aren't informed about or practiced with. Being able to describe a table and how you want the data laid out, and get good results, is helpful.
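And "describe the table you want" really is just a prompt. Here's a hedged sketch reusing the same placeholder local endpoint as above; the column names and format are made up for illustration:

```python
# Sketch: asking a model to lay out data as a table from a plain-language description.
# Same placeholder assumptions as above: OpenAI-compatible local server, "llama3" model.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

resp = client.chat.completions.create(
    model="llama3",  # placeholder
    messages=[{
        "role": "user",
        "content": (
            "Create a Markdown table for tracking monthly expenses. "
            "Columns: Date, Category, Description, Amount (USD). "
            "Sort rows by Date and add a final row totaling Amount."
        ),
    }],
)
print(resp.choices[0].message.content)
```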