[–] Sasha@lemmy.blahaj.zone 1 points 6 days ago* (last edited 6 days ago) (1 children)

I'm pretty sure they touch on those points in the paper; they knew they were overloading it and were looking specifically at how it handled that. My understanding is that they're testing failure modes to probe the inner workings to some degree: the abstract discusses the impact of filling up the context, mentions the setup is designed as a stress test, and they're particularly interested in memory limits, so I'm pretty sure they deliberately chose not to cater to an LLM's ideal conditions. It's not really a real-world use case of an LLM running a business (even if that's the framing given initially), and it's not just a test to demonstrate capabilities; it's an experiment meant to break the models in a simulated environment. The last line of the abstract kind of highlights this: they're hoping to find flaws in order to improve the models generally.
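To make the "filling up the context" bit concrete, here's a rough sketch of my own (not from the paper; `call_model`, the word-count token estimate, and the 8k limit are all made-up stand-ins) showing how an agent transcript just keeps growing until it hits the window:

```python
# Minimal sketch: an agent loop that appends everything to its transcript
# will eventually exceed the model's context window.

CONTEXT_LIMIT = 8_000  # assumed context window, in "tokens"

def call_model(history):
    # Hypothetical placeholder: a real agent would send `history` to an LLM here.
    return f"step decision #{len(history)}"

def count_tokens(history):
    # Crude approximation; a real setup would use the model's tokenizer.
    return sum(len(msg.split()) for msg in history)

history = ["You are running a simulated vending-machine business."]
step = 0
while count_tokens(history) < CONTEXT_LIMIT:
    reply = call_model(history)
    history.append(reply + " with some reasoning attached" * 10)
    step += 1

print(f"Context filled after {step} steps; "
      f"anything earlier now has to be dropped or summarized.")
```

Once that limit is hit, whatever the harness does with the overflow (drop it, summarize it, or just let things degrade) is exactly the kind of behaviour they seem to be stressing on purpose.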

Either way, I just meant to point out that they can absolutely just output junk as a failure mode.

[–] PhilipTheBucket@piefed.social 2 points 6 days ago (1 children)

Yeah, I get it. I don't think it's necessarily bad research or anything. I just feel like maybe it would have been better to approach it as two papers:

  1. Look at the funny LLM and how far off the rails it goes if you don't keep it stable, let it kind of "build on itself" iteratively over time, and don't put the right boundaries in place
  2. Figure out how we should actually wrap an LLM in a sensible framework so it can pursue an "agent" type of task: what leads it off the rails and what doesn't, and which ideas for keeping it grounded work and which don't (rough sketch of one such guardrail after this list)
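
Something like this is what I have in mind by "keeping it grounded" — a sketch of my own, not anything from the paper; `summarize`, `call_model`, and the 20-turn budget are all hypothetical stand-ins:

```python
# Sketch of one guardrail: keep a pinned task description plus only the most
# recent turns verbatim, and fold older turns into a running summary so the
# transcript never grows without bound.

MAX_RECENT_TURNS = 20  # assumed budget for verbatim history

def summarize(turns):
    # Placeholder: a real system would ask the model to compress these turns.
    return f"[summary of {len(turns)} earlier turns]"

def call_model(prompt):
    # Placeholder for the actual LLM call.
    return "next action"

def agent_step(task, summary, recent_turns, new_observation):
    recent_turns = recent_turns + [new_observation]
    if len(recent_turns) > MAX_RECENT_TURNS:
        # Fold everything beyond the budget into the summary.
        overflow = recent_turns[:-MAX_RECENT_TURNS]
        summary = summarize([summary] + overflow)
        recent_turns = recent_turns[-MAX_RECENT_TURNS:]
    prompt = "\n".join([task, summary] + recent_turns)
    action = call_model(prompt)
    return summary, recent_turns + [action]
```

Nothing clever, but comparing setups like that against the "just let it pile up" version is the kind of thing I'd have liked to see as its own paper.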

And yeah, obviously they can get confused or output counterfactuals or nonsense as a failure mode. What I meant to say was just that they don't really do that as a response to an overload / "DDoS" situation specifically. They might do it as a result of too much context or a badly set-up framework around them, sure.

[–] Sasha@lemmy.blahaj.zone 1 points 6 days ago

I meant that they're specifically not going for that, though. The experiment isn't about improving the environment itself; it's about improving the LLM. Otherwise they'd have spent the paper evaluating the effects of different environments and not different LLMs.