To date, it remains legal for humans to borrow a book from the library, read it, learn skills & knowledge from it, and apply what they've learned to make money — without ever paying the author.
Copyright does not, in general, grant control over the ideas in a work; only their specific expression. It deals with copying the text; it is not a tax on the information or knowledge contained in that text.
It also does not assure the author or publisher of a share of all revenues that anyone is ever able to make using the knowledge recorded in a work.
I suspect the problem is that the AI copies whole sentences originally published by authors, not that it just "learns" from them.
A while back I tried to get ChatGPT to recite the text of Harry Potter and the Philosopher's Stone to me, as a test of just how much copyrighted text it's willing to recite.
It got partway through the first sentence before freezing up, presumably due to a sensitivity to copyright.
So I suspect that at least OpenAI are already taking significant steps to prevent their systems from reciting copyrighted text verbatim.
I just tried it with Bing Chat, and it actually explains that it can't because doing so would violate the author's copyright.
But it doesn't copy full sentences. If it did, it wouldn't be such a black box. Training builds an utterly enormous matrix of weights, billions of parameters, and those parameters make up the entirety of what the LLM "knows" or "can know". It's closest to text prediction: it doesn't store full sentences, but if a sentence was used enough times in the training data, it can predict the rest of it. It can even do that without having scraped the book itself, simply because it scraped something else (likely many something elses) that contained the quote.
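To make that concrete, here's a toy sketch of count-based next-word prediction. It's nothing like a real transformer, and the corpus, function names, and numbers are all made up for illustration, but it shows the effect described above: if a quote appears often enough in the training text, greedily picking the most likely next word reproduces it verbatim even though no sentence is stored anywhere.

```python
# Toy sketch: "train" by counting which word follows each pair of words,
# then complete a prompt by repeatedly choosing the most frequent next word.
from collections import Counter, defaultdict

# Hypothetical training data: the same quote appears on many scraped pages.
corpus = (
    "the boy who lived " * 50        # oft-repeated quote
    + "the boy who cried wolf " * 3  # rarer competing continuation
).split()

# Build trigram counts: (previous two words) -> Counter of next words.
follows = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    follows[(a, b)][c] += 1

def complete(prompt, n_words=3):
    words = prompt.split()
    for _ in range(n_words):
        options = follows.get((words[-2], words[-1]))
        if not options:
            break
        # Greedy decoding: take the most frequent continuation.
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(complete("the boy"))  # -> "the boy who lived the"
```

A real LLM replaces the counting table with billions of learned weights and predicts over tokens rather than words, but the point is the same: heavily repeated passages in the training data become the highest-probability continuation, which is why a model can "recite" a famous line it never stored as text.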