393 points · submitted 5 months ago by neme@lemm.ee to c/technology@lemmy.world
[-] Rhaedas@fedia.io 24 points 5 months ago

The narrow-purpose models seem to be the most successful, which supports the idea that a general AI isn't going to emerge from LLMs alone. It's interesting that hallucinations are treated as a problem, yet they're probably part of why LLMs can be creative (much like humans). We shouldn't want to stop them entirely, just control when they happen and be aware of when the AI is off the tracks. A group of different models working together and checking each other might work (and has probably already been tried; it's hard to keep up).
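The cross-checking idea above could be sketched roughly like this. This is a toy illustration, not anything from the thread: the "models" here are hypothetical stand-in functions, with one drafter proposing an answer and several checkers voting on whether to accept it.

```python
# Toy sketch of "different models checking each other".
# drafter/checker functions are hypothetical stand-ins for real LLM calls.

def drafter(prompt):
    # Stand-in generator model: returns a claim (could be a hallucination).
    return "Paris is the capital of France."

def make_checker(known_facts):
    # Stand-in verifier model: accepts a claim only if it matches
    # the facts this particular checker "knows".
    def check(claim):
        return claim in known_facts
    return check

def answer_with_cross_check(prompt, drafter, checkers, quorum=0.5):
    # Draft once, then let the checker models vote; accept the claim
    # only if at least a quorum of them agree.
    claim = drafter(prompt)
    votes = sum(1 for check in checkers if check(claim))
    accepted = votes / len(checkers) >= quorum
    return claim, accepted

checkers = [
    make_checker({"Paris is the capital of France."}),
    make_checker({"Paris is the capital of France."}),
    make_checker(set()),  # a dissenting checker
]
claim, ok = answer_with_cross_check("capital of France?", drafter, checkers)
print(claim, ok)  # 2 of 3 checkers agree, so the claim is accepted
```

The interesting knob is the quorum: set it low and the creative (hallucination-prone) drafts survive; set it high and only claims the verifiers agree on get through, which matches the "control when they happen" point above.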

[-] dan1101@lemm.ee 1 point 5 months ago

Yeah, the hallucinations could be very useful for art and as creative stepping stones, but not so much for factual information.

this post was submitted on 12 Jun 2024