this post was submitted on 02 Aug 2023
361 points (94.1% liked)
Technology
The models are also getting larger (and require ever more insane amounts of resources to train) far faster than they are getting better.
I disagree. With models such as Llama, it has become clear that there are interesting advantages to pushing the ratio of training data to parameters even higher. I don't think the next iterations of models from the big corps will 10x the parameter count until Nvidia has really pushed the hardware; models are getting better over time regardless. ChatGPT's deterioration mostly comes from OpenAI's safety tuning and is not a fair assessment of progress on LLMs in general. The leaderboard of open-source models has been steadily improving over time: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
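For context on the data-to-parameter ratio point: the Chinchilla scaling work suggested roughly 20 training tokens per parameter for compute-optimal training, and Llama-style models deliberately train well past that point. A rough sketch of the arithmetic (the 20:1 ratio and the 6·N·D FLOPs rule are published heuristics, not figures from this thread; the model sizes are just illustrative):

```python
# Rough Chinchilla-style compute-optimal scaling arithmetic.
# Heuristics (both approximations from the scaling-law literature):
#   - compute-optimal training uses ~20 tokens per parameter
#   - training compute is roughly 6 * params * tokens FLOPs

def compute_optimal_tokens(params: float, tokens_per_param: float = 20.0) -> float:
    """Approximate compute-optimal training-token count for a model size."""
    return params * tokens_per_param

def training_flops(params: float, tokens: float) -> float:
    """Rough estimate of total training compute in FLOPs."""
    return 6.0 * params * tokens

# Llama-style model sizes, for illustration only.
for params in (7e9, 13e9, 70e9):
    tokens = compute_optimal_tokens(params)
    flops = training_flops(params, tokens)
    print(f"{params/1e9:.0f}B params -> ~{tokens/1e9:.0f}B tokens, ~{flops:.2e} FLOPs")
```

The point being made above is that you can keep training a fixed-size model on far more tokens than this "optimal" count and keep getting a better (and cheaper-to-run) model, rather than growing the parameter count 10x.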
But bigger models have new "emergent" capabilities. I've heard that beyond a certain size they start to know what they know and hallucinate less.
Wow, you heard that? Crazy, bro.
One of the papers about it: https://arxiv.org/pdf/2206.07682.pdf