I feel like one way to do this would be to break up models and their training data into mini-models and mini-batches, instead of one big model, and to restrict the training data to material used with permission or drawn from the public domain. Then, when a company is required to take information out of a model because permission to use it has been revoked or has expired, they could identify the relevant training data in the mini-batches, remove it, and retrain just the corresponding mini-model, which would be much faster and cheaper than retraining the entire massive model.
A major problem with this, though, would be figuring out how to efficiently query multiple mini-models and combine their outputs into a single response. I'm not sure how you could do that, at least not very well...
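For what it's worth, this is roughly the idea behind the "SISA" (Sharded, Isolated, Sliced, Aggregated) approach from the machine-unlearning literature. Here's a minimal sketch, assuming one small classifier per shard and simple majority-vote aggregation; the shard count, model choice, and the `unlearn` helper are all illustrative assumptions, not a real system:

```python
# Minimal shard-based unlearning sketch: train independent mini-models on
# disjoint data shards, so removing one example only retrains one shard.
import numpy as np
from sklearn.linear_model import LogisticRegression

N_SHARDS = 4

def train_shard(X, y):
    """Train one independent mini-model on its own slice of the data."""
    return LogisticRegression(max_iter=1000).fit(X, y)

def train_all(X, y):
    """Split the dataset into disjoint shards and train one model per shard."""
    shards = [[X[i::N_SHARDS], y[i::N_SHARDS]] for i in range(N_SHARDS)]
    models = [train_shard(sx, sy) for sx, sy in shards]
    return shards, models

def unlearn(shards, models, shard_idx, row_idx):
    """Remove one revoked example: only the shard that held it is retrained."""
    shards[shard_idx][0] = np.delete(shards[shard_idx][0], row_idx, axis=0)
    shards[shard_idx][1] = np.delete(shards[shard_idx][1], row_idx, axis=0)
    models[shard_idx] = train_shard(*shards[shard_idx])  # others untouched

def predict(models, X):
    """Combine the mini-models' answers with a simple majority vote."""
    votes = np.stack([m.predict(X) for m in models])
    return (votes.mean(axis=0) >= 0.5).astype(int)

# Toy usage on random binary-classification data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

shards, models = train_all(X, y)
print("before removal:", predict(models, X[:5]))
unlearn(shards, models, shard_idx=2, row_idx=0)  # drop one revoked example
print("after removal: ", predict(models, X[:5]))
```

The trade-off is visible right in the sketch: removing an example only costs one shard's retraining, but each mini-model only ever sees a fraction of the data, and the naive vote at the end is exactly the aggregation problem you're describing.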
You could certainly break up the training data, but breaking the model itself into mini-models based on which training data was used wouldn't work for neural networks trained with gradient descent. The model's state depends on the totality of the training data it has seen, and on the order it saw it in, so it isn't possible to remove the effect of one specific training example without retraining on all of the data that followed it (and even that assumes you stored a snapshot of the model before every single training example, which I doubt anyone does).
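To make the cost concrete, here's a minimal sketch of that snapshot-and-replay idea, assuming a toy linear model trained with plain SGD; the model, loss, and per-example checkpointing are purely illustrative:

```python
# Snapshot the weights before every training example, then "unlearn" one
# example by rolling back to its snapshot and replaying everything after it.
import numpy as np

def sgd_step(w, x, y, lr=0.1):
    """One gradient step on squared error for a toy linear model."""
    grad = 2 * (w @ x - y) * x
    return w - lr * grad

def train_with_snapshots(w0, data):
    """Train one example at a time, saving the weights before each step."""
    snapshots, w = [], w0
    for x, y in data:
        snapshots.append(w.copy())  # one full model copy per example!
        w = sgd_step(w, x, y)
    return w, snapshots

def unlearn(data, snapshots, k):
    """Exactly forget example k: roll back, then replay everything after it."""
    w = snapshots[k]
    for x, y in data[k + 1:]:  # every later example must be re-processed
        w = sgd_step(w, x, y)
    return w

rng = np.random.default_rng(1)
data = [(rng.normal(size=3), rng.normal()) for _ in range(100)]
w0 = np.zeros(3)

w, snaps = train_with_snapshots(w0, data)
w_clean = unlearn(data, snaps, k=3)  # forget the 4th example exactly
print("weights changed after unlearning:", not np.allclose(w, w_clean))
```

Even in this best case you pay one full model copy in storage per training example, and if the removed example came early in training the replay is nearly a full retrain anyway, which is why nobody does this at scale.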
However, that's no excuse: it is of course entirely possible to retrain a network from scratch on a clean dataset, and that is what these companies should do.
Am I correct in assuming that sounds a bit like libraries used in programming?
I believe this is how the Tesla FSD beta AI works.