this post was submitted on 13 Mar 2025
1886 points (99.7% liked)

People Twitter

6835 readers

People tweeting stuff. We allow tweets from anyone.

RULES:

  1. Mark NSFW content.
  2. No doxxing people.
  3. Must be a pic of the tweet or similar. No direct links to the tweet.
  4. No bullying or international politics.
  5. Be excellent to each other.
  6. Provide an archived link to the tweet (or similar) being shown if it's a major figure or a politician.

founded 2 years ago
[–] kane@femboys.biz 2 points 1 month ago

Exactly. This is why I have a love/hate relationship with just about any LLM.

I love it most for generating code samples (small enough that I can manually check them, not entire files/projects) and rewriting existing text, again small enough to verify everything. The common theme is that I have to re-read its output a few times to make 100% sure it hasn't made some random mistake.

I'm not entirely sure we're going to resolve this without additional technology outside of the LLM itself.

[–] OsrsNeedsF2P@lemmy.ml -3 points 1 month ago* (last edited 1 month ago) (4 children)

Oof, let's see, what am I an expert in? Probably system design - I work at (insert big tech) and run a system design club there every Friday. I use ChatGPT to bounce ideas around and find holes in my design planning before each session.

Does it make mistakes? Not really? It has a hard time getting creative with nuanced examples (e.g., if you ask it to "give practical examples where the time/accuracy tradeoff in Flink is important", it can't come up with more than one or two truly distinct examples), but it's never wrong.
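
For anyone unfamiliar with that Flink tradeoff: the knob is how long the pipeline waits for out-of-order events before closing an event-time window. Waiting longer gives more complete, accurate results; waiting less gives lower latency. Here's a minimal sketch of the two relevant settings, assuming Flink's Java DataStream API (roughly 1.15+ for record support); `ClickEvent` and the whole pipeline are hypothetical, just for illustration:

```java
import java.time.Duration;

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

public class TimeAccuracyTradeoff {

    // Hypothetical event type, purely for illustration.
    public record ClickEvent(String userId, long timestampMillis) {}

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        env.fromElements(
                new ClickEvent("alice", 1_000L),
                new ClickEvent("bob", 2_000L))
            // Knob #1: how far out of order we tolerate events.
            // A larger bound = more accurate windows, but results arrive later.
            .assignTimestampsAndWatermarks(
                WatermarkStrategy
                    .<ClickEvent>forBoundedOutOfOrderness(Duration.ofSeconds(10))
                    .withTimestampAssigner((event, ts) -> event.timestampMillis()))
            .keyBy(ClickEvent::userId)
            .window(TumblingEventTimeWindows.of(Time.minutes(1)))
            // Knob #2: emit on time, but keep the window open so stragglers
            // arriving within 30s trigger corrected (late) results.
            .allowedLateness(Time.seconds(30))
            // Trivial reduce just to make the pipeline runnable.
            .reduce((a, b) -> a)
            .print();

        env.execute("time-accuracy tradeoff sketch");
    }
}
```

The point of the sketch is just that the tradeoff is an explicit configuration choice (watermark bound plus allowed lateness), which is why "where does this matter in practice" is a genuinely open-ended question.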

The only times it's blatantly wrong are when it hallucinates due to a lack of context (or an oversaturated context). But you can kind of tell when something doesn't make sense, and prod it with follow-ups.

Tl;dr funny meme, would be funnier if true
