submitted 8 months ago by ylai@lemmy.ml to c/technology@lemmy.world
[-] HaywardT@lemmy.sdf.org 0 points 8 months ago

Interesting article. It seems to be about a bug, not a designed behavior. It also says it exposes random excerpts from books and other training data.

[-] Linkerbaan@lemmy.world -1 points 8 months ago

It's not designed to do that, because they don't want to reveal the training data. But factually, every neural network is a combination of its training data encoded into its neurons (weights).

When given the right prompt (or image-generation query), they will replicate it exactly, because that's how they were trained in the first place: reproducing their source images with as few neurons as possible, and tweaking the weights whenever the output isn't correct.
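
Roughly, that "replicate and tweak" cycle looks like this toy sketch (a linear mini-autoencoder in NumPy, with made-up sizes and random data, nothing like a production model):

```python
import numpy as np

rng = np.random.default_rng(0)
images = rng.random((10, 64))          # 10 fake "images", 64 pixels each
W_enc = rng.normal(0, 0.1, (64, 8))    # squeeze 64 pixels into 8 numbers
W_dec = rng.normal(0, 0.1, (8, 64))    # expand 8 numbers back to 64 pixels

for step in range(1000):
    code = images @ W_enc              # encode
    recon = code @ W_dec               # try to replicate the input
    error = recon - images             # how wrong the replica is
    # "tweak when it's not correct": gradient step on the squared error
    W_dec -= 0.01 * code.T @ error / len(images)
    W_enc -= 0.01 * images.T @ (error @ W_dec.T) / len(images)

recon = (images @ W_enc) @ W_dec
print("mean squared reconstruction error:", float(((recon - images) ** 2).mean()))
```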

[-] HaywardT@lemmy.sdf.org 3 points 8 months ago

That is a little like saying every photograph is a copy of the thing it depicts. That is just factually incorrect. I have many three-layer networks that are not the thing they were trained on. As a compression method they can be very lossy, and in fact that is often the point.
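
Some back-of-the-envelope arithmetic shows why (all sizes invented for illustration): a small three-layer network simply holds far fewer numbers than the data it was trained on, so it cannot store exact copies of everything.

```python
# Hypothetical sizes, purely for illustration.
n_images, pixels = 10_000, 64 * 64            # training set: 10k images, 64x64
hidden = 32                                   # width of the single hidden layer

data_values = n_images * pixels               # numbers in the training data
weights = pixels * hidden + hidden * pixels   # 3-layer net: input -> hidden -> output

print(f"values in the training data: {data_values:,}")
print(f"values in the network:       {weights:,}")
print(f"ratio:                       {data_values / weights:.0f}x")
```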
