this post was submitted on 25 May 2025
Nowhere Else To Share
Well, yeah. It’s an LLM built on a lot of scraping social media.
Now that's an interesting idea.
We ask questions. Get answers by scraping social media. The answers inform social media, informing further answers to questions. Etc. A spinning wheel.
It's deeply incestuous. In a hundred generations what monsters may spawn?
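The "spinning wheel" above can be illustrated with a toy simulation. This is a hypothetical sketch of the feedback effect (often called model collapse), not how any real LLM pipeline is trained: each "generation" fits a simple Gaussian to the current corpus, then replaces the corpus with samples drawn from that model, standing in for model output flowing back onto social media and into the next scrape.

```python
import random
import statistics

def generation_step(samples, n=100):
    # "Train": fit a simple Gaussian model to the current corpus.
    mu = statistics.fmean(samples)
    sigma = statistics.pstdev(samples)  # MLE estimate of spread
    # "Publish and re-scrape": the next corpus is drawn from the
    # model itself, so model artifacts replace real data.
    return [random.gauss(mu, sigma) for _ in range(n)]

random.seed(0)
# Generation zero: genuinely human-written "data".
corpus = [random.gauss(0.0, 1.0) for _ in range(100)]

for gen in range(200):
    corpus = generation_step(corpus)

# After many model-on-model generations the diversity of the
# corpus collapses: the spread is far below the original 1.0.
print(statistics.pstdev(corpus))
```

The shrinkage here is just the bias of re-estimating spread from finite samples compounding over generations; real systems are vastly more complex, but the direction of the effect (the tails of the distribution get eaten first) is the same point the comment is making.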
It's not "an interesting idea", it's exactly how they work.
Ideas about how things work can be interesting too.
It's not an "idea about how it works". It is how it works.
Can you not just admit you learned something here? Or do you just have to argue with everything to try and appear right?
What's wrong with, "oh, I didn't know that. How interesting!"
I think it was the part about how the training data gets poisoned that was the interesting idea.
It is also the reality we're living in however.
Based on their behaviour, I'm not so sure. It seemed to me to be a way of saying "that's maybe not true, but it's fun to think about". At least, that's how I'd use the phrase "that's an interesting idea". If I just found it interesting, I'd say "how interesting!"
But yes, it is indeed fascinating how LLMs work.
are you a robot?
Let's rewind to before that desperate (and likely spontaneous) accusation, and I'll give you another chance to reply in a normal manner.
No deflection. Just admit you didn't know LLMs scrape social media. That's all. It's okay; we don't come into this world with all of its knowledge.
I actually did have a vague idea in that general direction.
But that's rather beside my point. I mean, the AI definitely offered these answers. The answers are definitely gender biased. Offering that it's merely an artifact of the LLM technology is definitely a terrible excuse for that.
And given that LLMs are well known to be tweaked to align better with the philosophical styles of the hour, doubly so.
So, once again, you double down. No, you obviously didn't know that, and clearly still don't actually understand it, since you're claiming it was engineered to push a narrative (what narrative you won't say, but I bet it rhymes with bliss gandry).
Lastly, it's not an "excuse". It's an explanation. Calling it an "excuse" is just another attempt to deflect the answer and avoid being wrong.
Well, given that the bias is generally considered a bad thing, explanations absolving them of responsibility for it are what's generally called an "excuse".
It's not absolving them of responsibility. They're responsible for the data they train on - the model doesn't need to be trained on social media.