this post was submitted on 19 Sep 2025
135 points (90.9% liked)

[–] PhilipTheBucket@piefed.social 2 points 6 days ago (1 children)

Yeah, I get it. I don't think it's necessarily bad research or anything. I just feel like maybe it would have been better approached as two papers:

  1. Look at the funny LLM and how far off the rails it goes when you let it iteratively "build on itself" over time without keeping it stable or putting the right boundaries on it
  2. How we should actually wrap an LLM into a sensible framework so it can pursue an "agent" type of task: what leads it off the rails and what doesn't, and which ideas for keeping it grounded work and which don't

And yeah, obviously they can get confused or output counterfactuals or nonsense as a failure mode. What I meant was just that they don't really do that in response to an overload / "DDOS" situation specifically. They might do it as a result of too much context or a badly set up framework around them, sure.
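The "too much context" failure mode above can be sketched in a few lines. Everything here is hypothetical and illustrative, not from the paper under discussion: `fake_llm` stands in for a real model call, and `MAX_CONTEXT_CHARS` is an arbitrary budget. The point is just that an agent loop which appends its own output back into its context needs some boundary, or the transcript grows without limit:

```python
# Minimal sketch of an agent loop with a context boundary.
# All names (fake_llm, MAX_CONTEXT_CHARS) are illustrative assumptions.

MAX_CONTEXT_CHARS = 200  # arbitrary budget for this sketch


def fake_llm(context: str) -> str:
    """Stand-in for a real completion call: emits a numbered step marker."""
    return f"step-{context.count('step-') + 1}"


def run_agent(task: str, iterations: int) -> str:
    context = task
    for _ in range(iterations):
        reply = fake_llm(context)
        # The agent "builds on itself": its own output re-enters the context.
        context += "\n" + reply
        # Boundary: keep only the most recent slice of the transcript,
        # instead of letting it grow unboundedly over iterations.
        if len(context) > MAX_CONTEXT_CHARS:
            context = context[-MAX_CONTEXT_CHARS:]
    return context


transcript = run_agent("summarize the logs", 50)
print(len(transcript))  # never exceeds MAX_CONTEXT_CHARS
```

Without the truncation guard, fifty iterations would keep compounding the model's own output into its input, which is exactly the kind of unbounded "build on itself" drift the comment describes.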

[–] Sasha@lemmy.blahaj.zone 1 points 6 days ago

I meant that they're specifically not going for that, though. The experiment isn't about improving the environment itself; it's about improving the LLM. Otherwise they'd have spent the paper evaluating the effects of different environments, not different LLMs.