this post was submitted on 06 Apr 2025
Hacker News
Posts from the RSS Feed of HackerNews.
The feed sometimes contains ads and posts that have been removed by the mod team at HN.
you are viewing a single comment's thread
I don't know much about cybersecurity, but from what I understand about how LLMs work, there was always going to be a limit to what they can actually do. They have no understanding; they're just giant probability engines, so the "hallucinations" aren't a solvable bug, they're inherent in the design of the models. And it's only going to get worse, as training data is going to be harder and harder to find that hasn't been poisoned by current LLM output.
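To make the "probability engine" point concrete, here's a toy sketch in plain Python (the vocabulary and logit values are made up for illustration, not taken from any real model): generation is just repeatedly sampling the next token from a probability distribution, and nothing in the loop checks whether the output is true.

```python
import math
import random

def softmax(logits):
    # Turn raw scores into a probability distribution (numerically stable).
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(vocab, logits):
    # Pick the next token purely by probability; there is no notion of
    # "correct" or "factual" anywhere in this step.
    probs = softmax(logits)
    return random.choices(vocab, weights=probs, k=1)[0]

# Hypothetical vocabulary and scores, just to show the mechanism.
vocab = ["cat", "dog", "quantum", "firewall"]
logits = [2.0, 1.5, 0.3, 0.1]

for _ in range(5):
    print(sample_next_token(vocab, logits))
```

In a real model the logits depend on the preceding context and the vocabulary is enormous, but the generation step is still this kind of sampling, which is why a fluent-but-wrong continuation can come out looking just as confident as a correct one.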