GPT-4 is getting worse over time, not better.
(twitter.com)
One theory I haven't seen mentioned here: there has been a lot of recent work on setups where multiple LLMs communicate with each other. If those model-generated exchanges were fed back into the RL loop, we could see degradation effects similar to the ones recently in the news for image generation models.
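For anyone curious what that kind of feedback-loop degradation looks like in the simplest possible case, here is a toy sketch of my own (not from the linked thread, and not how GPT-4 is actually trained): a "model" that is just a Gaussian fit, repeatedly re-fitted to its own samples instead of real data.

```python
# Toy illustration of training on model-generated data ("model collapse").
# The "model" is a Gaussian fitted to data; each generation is trained only
# on samples drawn from the previous generation's model. The fitted spread
# follows a multiplicative random walk and, over many generations, tends to
# collapse toward zero -- the model loses diversity without any new real data.
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: fit the model to real data.
real_data = rng.normal(loc=0.0, scale=1.0, size=50)
mu, sigma = real_data.mean(), real_data.std()

for generation in range(1, 31):
    # Each new generation sees only synthetic samples from the previous model.
    synthetic = rng.normal(loc=mu, scale=sigma, size=50)
    mu, sigma = synthetic.mean(), synthetic.std()
    print(f"generation {generation:2d}: mu={mu:+.3f}, sigma={sigma:.3f}")
```

Obviously an RL loop over conversing LLMs is far more complicated than this, but the basic failure mode, a model's own outputs gradually replacing the real distribution it was meant to learn, is the same concern.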