this post was submitted on 17 Mar 2025
446 points (99.1% liked)

Technology

66783 readers
6032 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
 

China has released a set of guidelines on labeling internet content that is generated or composed by artificial intelligence (AI) technology, which are set to take effect on Sept. 1.

[–] umami_wasbi@lemmy.ml 4 points 1 day ago* (last edited 1 day ago) (1 children)

The problem is that you can't make a digital label that is hard to circumvent. It's much like a signature: you sign something to prove it genuinely came from you, and you simply don't sign things that aren't yours. But in digital formats, a signature can just be stripped out of the data. Watermarks on images can now be patched out with the help of inpainting models. Disclaimers in text can simply be deleted. The default assumption shouldn't be "this thing doesn't have an AI label, so it must have been written by a human." The label itself is a slippery slope that helps misinformation spread faster and aids the building of alternate facts. Adding a label won't help people identify content generated with ML models; it will just let them defer that identification to the mere label, trusting whatever it says, or doesn't say.
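To illustrate the asymmetry: a label or signature can prove presence of provenance, but its absence proves nothing, because anyone can delete it. Here's a minimal sketch using a hypothetical HMAC-based label (the key name and `||SIG:` framing are invented for illustration, not any real standard):

```python
import hmac
import hashlib

SECRET = b"publisher-signing-key"  # hypothetical key held by the content creator


def sign(data: bytes) -> bytes:
    # Append an HMAC tag so anyone holding the key can verify origin.
    tag = hmac.new(SECRET, data, hashlib.sha256).hexdigest().encode()
    return data + b"||SIG:" + tag


def verify(blob: bytes) -> bool:
    data, sep, tag = blob.rpartition(b"||SIG:")
    if not sep:
        return False  # no label at all: proves nothing about origin
    expected = hmac.new(SECRET, data, hashlib.sha256).hexdigest().encode()
    return hmac.compare_digest(tag, expected)


signed = sign(b"human-written article")
assert verify(signed)                        # genuine, label intact
stripped = signed.rpartition(b"||SIG:")[0]   # trivially remove the label
assert not verify(stripped)                  # now indistinguishable from unlabeled content
```

Once the tag is stripped, the human-written text and an unlabeled AI-generated text look identical to the verifier, which is exactly why "no label, so it's human" is an unsafe default.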

Misinformation doesn't spread fast simply because fascists gain control of the media. Just look at how China, Russia, and Iran run misinformation campaigns: they don't have to control the media outlets themselves, only a handful of seed accounts posting sensational headlines that attract people with greater reach and recognition, who then spread the story further. For more on misinformation and disinformation, I recommend Ryan McBeth's videos on YouTube.

Yes, we need a way to identify what is and isn't generated by ML models, but that should not be done by labeling ML-generated content.

[–] LadyAutumn@lemmy.blahaj.zone 1 points 1 day ago (1 children)

I'm curious what you would suggest to aid in identifying generated content, if not clear labeling. Sure, it's circumventable, but again, it's more than what already exists. It also provides legal precedent for repercussions against companies trying to pass off AI-generated content as human-created.

[–] umami_wasbi@lemmy.ml 2 points 1 day ago* (last edited 1 day ago)

Please allow me a bit of time to think this over and organize my thoughts. It might take a while, but I will give you a response.