The model has become inbred because it’s now impossible to scrape the web without AI content getting ingested, which is full of “hallucinations” and other weird artifacts. The last opportunity to get “uncontaminated” training data was sometime in mid 2022.
Not to say that it’s causing this particular problem, but this issue will emerge eventually. Garbage in = garbage out. Eventually GPT-19 will grow a mighty Habsburg chin.
Maybe not yet, but...
- Spez will turn Reddit into a bot farm and sell this as training data
- Musk turns Twitter into a bigoted cesspool and will sell this as training data, which will subsequently be flagged for low quality (also: a bot farm)
- Threads is a corporate ad dashboard (and we already know how easy it is to GPT copy) and Zuck will sell this as training data
- Facebook is either dead or only good for boomers and Poles
- blogs are dead
- Fediverse is out there waiting to be scraped but possibly too small to sustain a big model
We'te getting there, hopefully.
Scrapped?... Or scraped?
absolutely scraped, fixed
Also We'te
, which I believe is a Klingon name.
I suspect future models are going to have to put some more focus on learning using techniques more like what humans use, and on cognition.
Like, compared to a human these language models need very large quantities of text input. When humans are first learning language they get lots of visual input along with language input, and can test their understanding with trial-and-error feedback from other intelligent actors. I wonder if perhaps those factors greatly increase the rate at which understanding develops.
Also, humans tend to cogitate on inputs while ingesting them during learning. So if the information in new inputs disagrees with current understanding, those inputs are less likely to affect current understanding (there's a whole 'how to change your mind' thing here that is necessary for people to use, but if we're training a model on curated data that's probably less important for early model training).
I don't know the details of how model training works, but it would be interesting to know if anyone is using a progressive learning technique where the model being trained is used to judge new training data before it is used as input to update the model's weights. That would be kind of like how children learn: starting with very simple words and syntax and building up conceptual understanding gradually. I'd assume so, since it's an obvious idea, but I haven't heard about it.
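To make the idea concrete, here's a toy sketch of that kind of self-filtered training. This is not how any production LLM is trained; it's a made-up minimal example (the names `UnigramLM`, `progressive_train`, and the `max_ppl` threshold are all invented for illustration). The current model scores each new sample by perplexity and only ingests samples it doesn't find too surprising:

```python
import math
from collections import Counter

class UnigramLM:
    """Toy unigram language model with add-one smoothing."""

    def __init__(self):
        self.counts = Counter()
        self.total = 0

    def update(self, tokens):
        # "Training" here is just accumulating token counts.
        self.counts.update(tokens)
        self.total += len(tokens)

    def perplexity(self, tokens):
        # Add-one smoothing so unseen tokens get non-zero probability.
        vocab = len(self.counts) + 1
        log_p = 0.0
        for t in tokens:
            p = (self.counts[t] + 1) / (self.total + vocab)
            log_p += math.log(p)
        return math.exp(-log_p / len(tokens))

def progressive_train(model, corpus, max_ppl):
    """Only ingest samples the current model doesn't find too surprising."""
    accepted = []
    for tokens in corpus:
        # Bootstrap: an empty model accepts everything.
        if model.total == 0 or model.perplexity(tokens) <= max_ppl:
            model.update(tokens)
            accepted.append(tokens)
    return accepted
```

With a threshold like `max_ppl=8.0`, a sample similar to what the model has already seen (e.g. `["the", "cat", "ran"]` after `["the", "cat", "sat"]`) gets accepted, while gibberish like `["zq", "xv", "kw"]` is rejected. The real-world analogue would be filtering scraped text by a trained model's loss, which is one approach people discuss for keeping low-quality or AI-generated content out of training sets.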
The lobotomies will continue. Free models will keep getting better.
The ChatGPT people are really paranoid. GPT-3 is so heavily trained against hallucinating that it often can't, even when it needs to in order to accomplish a task, out of fear the AI will confidently give the wrong answer.
Not the first time OpenAI has done this. DALLE2 used to be the best AI art program in the world. Then OpenAI decided that they didn't want to get sued by celebrities, so they made it so that if a face came out that resembled a celebrity, it would be distorted. But every face kind of looks like someone famous. Ta da! Now DALLE2 can't do faces.
Want a crane-shot aerial image of a teen couple in a corvette driving off into the sunset? Well, you are now banned for life from the DALLE2 service, because DALLE2 produced an image of a 'shot teen' and that violates its terms of service.
Dalle2 was always kind of shit tbh.
Dalle2 was great when it was free and stable diffusion didn't exist. I don't see the logic of: "Someone made a free version. Let's make the program worse and charge money for it!"
The only way in my mind this dumbing down happens is by fumbling with the model. So that's the one thing we can be sure of: the AI is most definitely being changed while publicly staying "ChatGPT 4". I assume they are either using clipping or token limitations to split the server load but fucking up the result, or they are purposely dumbing it down to capitalise on it later by introducing other pay models like ppl already mentioned.
Either way they are shooting themselves in the foot because a bunch of ppl will unsubscribe either out of spite for the change or because it's just not worth it anymore for them.
AI taking a running leap at enshittification.
Some people have been saying that since the beginning while some haven’t noticed this “decline”. It seems very subjective.
Honestly, as a daily user I think it's a combination of it getting worse at understanding vague prompts and people bumping up against edge cases more. I would suspect the former is due to things like prompt hardening but can only speculate, while the latter isn't hard to imagine just from frequent use.
lmao I was write back then :D
You mean "I was right* or "i wrote*"?
No no, he used to work as a wright. Built ships and shit.
You know how we have pre-bomb steel? We'll have pre-GPT data sets.
Yeah, when I first started using GPT-4, I didn't notice any hallucinations. Now I'm getting them all the time. Disappointing.
Just like most people after they achieve success.
Technology
This is a most excellent place for technology news and articles.
Our Rules
- Follow the lemmy.world rules.
- Only tech related content.
- Be excellent to each other!
- Mod approved content bots can post up to 10 articles per day.
- Threads asking for personal tech support may be deleted.
- Politics threads may be removed.
- No memes allowed as posts, OK to post as comments.
- Only approved bots from the list below; to ask if your bot can be added, please contact us.
- Check for duplicates before posting; duplicates may be removed.