this post was submitted on 21 Jan 2024
824 points (95.1% liked)

Technology

you are viewing a single comment's thread
[–] Even_Adder@lemmy.dbzer0.com 42 points 1 year ago* (last edited 1 year ago) (3 children)

I've only heard that running images through a VAE just once seems to break the Nightshade effect, but no one's really published anything yet.
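A toy stand-in for the mechanism being claimed above: a lossy encode/decode round-trip tends to attenuate small high-frequency perturbations, which is the kind of signal Nightshade adds. A real VAE is a learned neural codec; here the "codec" is crudely approximated by averaging adjacent samples and then upsampling, purely to illustrate why the round-trip could wash out a poison pattern. All numbers are made up for the sketch.

```python
import math
import random

random.seed(0)

def lossy_roundtrip(signal):
    """Crude stand-in for VAE encode/decode: average adjacent pairs
    (lossy 'encode'), then repeat each average ('decode')."""
    out = []
    for i in range(0, len(signal), 2):
        avg = (signal[i] + signal[i + 1]) / 2
        out += [avg, avg]
    return out

def norm(v):
    return math.sqrt(sum(x * x for x in v))

clean = [math.sin(i / 8) for i in range(256)]          # smooth "image content"
poison = [random.gauss(0, 0.05) for _ in range(256)]   # high-frequency perturbation
poisoned = [c + p for c, p in zip(clean, poison)]

# How much of the perturbation survives the round-trip:
residual = [a - b for a, b in zip(lossy_roundtrip(poisoned),
                                  lossy_roundtrip(clean))]
print(norm(poison), norm(residual))   # residual norm is noticeably smaller
```

The smooth content survives the round-trip almost intact while the perturbation is averaged down, which is the intuition behind the "one VAE pass breaks it" claim; whether a real VAE does this to Nightshade specifically is, as noted, unpublished.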

You can finetune models on known bad and incoherent images so that, when the trained embedding is used in the negative prompt, they output better images. So there's a chance that making a lot of purposefully bad data could actually make models better by helping the model recognize bad output and avoid it.
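A sketch of why a "known bad" embedding helps as a negative prompt: in classifier-free guidance, the denoiser's prediction is extrapolated away from the negative conditioning, `out = neg + scale * (pos - neg)`, so conditioning `neg` on an embedding trained to represent bad/incoherent images steers samples away from that region. The vectors below are made-up stand-ins for model predictions, just to show the arithmetic.

```python
def cfg(pos_pred, neg_pred, scale=7.5):
    """Classifier-free guidance combine step, applied element-wise:
    extrapolate from the negative prediction toward the positive one."""
    return [n + scale * (p - n) for p, n in zip(pos_pred, neg_pred)]

pos_pred = [0.2, 0.4, 0.1]   # prediction conditioned on the prompt
bad_pred = [0.5, 0.3, 0.3]   # prediction conditioned on the "bad" embedding

print(cfg(pos_pred, bad_pred))   # pushed away from the "bad" direction
```

With `scale > 1` the result overshoots past the positive prediction, actively moving away from whatever the negative embedding encodes, which is why training that embedding on deliberately bad data could pay off.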

[–] watersnipje@lemmy.blahaj.zone 11 points 1 year ago (1 children)
[–] Batman@lemmy.world 10 points 1 year ago (1 children)

Think they mean a Variational AutoEncoder

[–] KeenFlame@feddit.nu 2 points 1 year ago

Variable. But no, running it through that will not break any effect.

[–] sukhmel@programming.dev 5 points 1 year ago

> So there's a chance that making a lot of purposefully bad data could actually make models better by helping the model recognize bad output and avoid it.

This would be truly ironic

[–] HelloHotel@lemmy.world 2 points 1 year ago* (last edited 1 year ago)

If users have enough control and we can coordinate, then we could gaslight the AI into a screwed-up alternate reality