submitted 1 year ago by L4s@lemmy.world to c/technology@lemmy.world

Authors demand credit and compensation from AI companies using their work without permission | OpenAI, Alphabet, and Meta have been called out::The letter, published by professional writers' organization The Authors Guild, is addressed to the bosses of OpenAI, Alphabet, Meta, Stability AI, IBM, and Microsoft. It calls out...

[-] fubo@lemmy.world 31 points 1 year ago* (last edited 1 year ago)

To date, it remains legal for humans to borrow a book from the library, read it, learn skills & knowledge from it, and apply what they've learned to make money — without ever paying the author.

Copyright does not, in general, grant control over the ideas in a work, only over their specific expression. It restricts copying the text; it is not a tax on the information or knowledge that text contains.

It also does not assure the author or publisher of a share of all revenues that anyone is ever able to make using the knowledge recorded in a work.

[-] Temperche@feddit.de 5 points 1 year ago

I suspect the problem is that AI copies whole sentences that were originally published by authors - not that it just "learns" from them.

[-] fubo@lemmy.world 9 points 1 year ago

A while back I tried to get ChatGPT to recite the text of Harry Potter and the Philosopher's Stone to me, as a test of just how much copyrighted text it's willing to recite.

It got partway through the first sentence before freezing up, presumably because of some copyright filter.

So I suspect that at least OpenAI is already taking significant steps to prevent its systems from reciting copyrighted text verbatim.

[-] average650@lemmy.world 2 points 1 year ago

I just tried it with Bing Chat, and it actually explains that it can't because doing so would violate the author's copyright.

[-] CoderKat@lemm.ee 3 points 1 year ago

But it doesn't copy full sentences. If it did, these models might not be such black boxes. Training builds an enormous matrix of numbers - billions of parameters that are basically just weights - and those weights make up the entirety of what the LLM "knows" or "can know". It's closest to text prediction. Even though it doesn't store full sentences, if a sentence appeared often enough in the training data, it can predict the rest of it. It can even do that without ever having scraped the book itself, simply because it scraped something else (likely many something elses) that contained the quote.
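To make that last point concrete, here is a toy sketch (my own illustration, not how an LLM actually works internally - real models use learned weights over tokens, not word counts): a trivial next-word predictor trained only on documents that *quote* a famous line, never on the original work, can still complete the quote. The corpus sentences and function names below are invented for the example.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus_sentences):
    """Count, for each word, which words most often follow it."""
    follows = defaultdict(Counter)
    for sentence in corpus_sentences:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            follows[a][b] += 1
    return follows

def complete(follows, prompt, max_words=5):
    """Greedily extend the prompt with the most likely next word."""
    words = prompt.split()
    for _ in range(max_words):
        counts = follows.get(words[-1])
        if not counts:
            break
        words.append(counts.most_common(1)[0][0])
    return " ".join(words)

# The quote appears in several scraped documents; the "book" itself
# is never in the training data.
corpus = [
    "blog post quoting it thus to be or not to be",
    "review citing the line to be or not to be",
    "essay about the phrase to be or not to be",
]
model = train_bigrams(corpus)
print(complete(model, "to"))  # → "to be or not to be"
```

The predictor reproduces the quote verbatim purely because the statistics of its training data make that continuation overwhelmingly likely - the same mechanism, scaled up enormously, is why an LLM can sometimes emit memorized passages it never "copied" as text.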

this post was submitted on 19 Jul 2023
125 points (98.4% liked)
