this post was submitted on 22 Feb 2024
487 points (96.2% liked)
Technology
Yes, this is exactly correct. And it's not actually too slow: the specialized models can run quite quickly, and there are various speedups like Groq.
The issue is just the added cost of multiple passes, so companies are trying to make it "all-in-one," even though human cognition isn't an all-in-one process either.
For example, AI alignment would work much better if it took inspiration from the way the prefrontal cortex inhibits intrusive thoughts, rather than trying to prevent the generation of the equivalent of intrusive thoughts in the first place.
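In code terms, that "inhibit after generation" idea amounts to letting a generator run unconstrained and having a second pass veto candidates. A minimal sketch, where `generate_candidates` and `inhibitor_score` are stand-in stubs for real model calls (everything here is hypothetical):

```python
# Sketch: alignment as post-hoc inhibition rather than constrained generation.
# Both "models" below are stubs; a real system would call an LLM for each.

def generate_candidates(prompt):
    # Unconstrained generator: free to produce anything, including bad ideas.
    return [f"{prompt} -> idea {i}" for i in range(3)]

def inhibitor_score(candidate):
    # Secondary "prefrontal" pass: rates how acceptable a candidate is.
    # Stub rule: veto candidates containing a flagged phrase.
    return 0.0 if "idea 1" in candidate else 1.0

def respond(prompt, threshold=0.5):
    # Generate freely, then inhibit: keep only candidates the second
    # pass approves, instead of restricting generation up front.
    candidates = generate_candidates(prompt)
    return [c for c in candidates if inhibitor_score(c) >= threshold]
```

The point of the structure is that the generator never has to be "safe" on its own; the inhibition layer is a separate, swappable component.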
Exactly, and that's where the "too slow" part comes in. To get more robust behavior it needs multiple layers of meta-analysis, but that means far more text generation under the hood than a one-shot output requires.
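The cost concern can be made concrete: each meta-analysis layer is another full generation pass, so total generated text grows roughly linearly with the number of layers. A toy sketch (the "model" is a stub that just appends a marker):

```python
# Sketch: every layer of meta-analysis is another generation pass,
# so total tokens generated grow with the number of layers.

def model_pass(text):
    # Stub for one LLM pass; returns (output, tokens_generated_this_pass).
    out = text + " [refined]"
    return out, len(out.split())

def layered_answer(prompt, layers=3):
    # Run the prompt through `layers` sequential passes, tracking the
    # cumulative generation work done under the hood.
    total_tokens = 0
    text = prompt
    for _ in range(layers):
        text, tokens = model_pass(text)
        total_tokens += tokens
    return text, total_tokens
```

A three-layer pipeline does several times the generation work of a one-shot answer, which is the latency/cost tradeoff being discussed.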
Yes, but in terms of speed you don't need the same parameter count or quantization for the secondary layers.
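Roughly, the secondary passes can run on a much smaller, more heavily quantized model, so the extra layers add little cost compared to repeating the main model. A back-of-the-envelope sketch (the parameter counts and the params × bits cost proxy are illustrative assumptions, not measurements):

```python
# Sketch: secondary meta-analysis layers on a small quantized model
# cost far less than re-running the main model. Numbers are made up.

MAIN_MODEL = {"params_b": 70, "bits": 16}    # primary generator
CRITIC_MODEL = {"params_b": 7, "bits": 4}    # small quantized secondary model

def relative_cost(model):
    # Crude proxy: compute/memory cost scales with params * precision.
    return model["params_b"] * model["bits"]

def pipeline_cost(critic_layers):
    # One main-model pass plus several cheap secondary passes.
    return relative_cost(MAIN_MODEL) + critic_layers * relative_cost(CRITIC_MODEL)
```

Under these assumed numbers, a main pass plus three critic passes costs only slightly more than the main pass alone, versus quadrupling the cost if every layer reused the big model.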
If you haven't seen it, see how fast a very capable model can actually be: https://groq.com/
Yeah, I've seen that. I think things will get much faster very quickly; I'm just commenting on the first-gen tech we're seeing right now.