One way to make an A.I. model forget the things it learns from private user data is to use a technique called differential privacy. Differential privacy is a mathematical framework that adds carefully calibrated noise to the data or to the model's outputs, so that the privacy of individual users is preserved while the overall accuracy of the model is maintained. This means the model's dependence on any single user's data is provably bounded, so it cannot learn much about any one user, but it can still perform its intended task on aggregate data.
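A minimal sketch of the core idea (the Laplace mechanism) in Python, if you're curious; the laplace_mechanism helper and the example numbers are illustrative, not from any particular library:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release a statistic with Laplace noise scaled to sensitivity/epsilon.

    Smaller epsilon means more noise and stronger privacy.
    """
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_value + noise

# Example: privately release the average age in a small dataset.
ages = np.array([23, 35, 41, 29, 52])
true_mean = ages.mean()

# Sensitivity of the mean: one person can change it by at most max_age / n
# (assuming ages are clipped to [0, 100]).
sensitivity = 100 / len(ages)

private_mean = laplace_mechanism(true_mean, sensitivity, epsilon=1.0)
print(f"true mean: {true_mean:.1f}, private mean: {private_mean:.1f}")
```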
Another way to make an A.I. model forget the things it learns from private user data is to use a technique called federated learning. Federated learning is a distributed approach in which a shared model is trained on local data across many devices, without sending that data to a central server. The devices share only their model updates or parameters, not the raw data, which protects the privacy of the users.
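And a toy, one-round sketch of federated averaging; the local_update and federated_average helpers and the simulated devices are made up for illustration:

```python
import numpy as np

def local_update(global_weights, local_data, lr=0.01):
    """Each device refines the shared model on its own data (here, one
    gradient step of least-squares regression) and returns only the weights.
    """
    X, y = local_data
    grad = 2 * X.T @ (X @ global_weights - y) / len(y)
    return global_weights - lr * grad

def federated_average(weight_list):
    """The server averages the devices' weights; raw data never leaves a device."""
    return np.mean(weight_list, axis=0)

# One training round with three simulated devices, each holding private data.
rng = np.random.default_rng(0)
devices = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(3)]

global_weights = np.zeros(3)
updates = [local_update(global_weights, data) for data in devices]
global_weights = federated_average(updates)
print("new global weights:", global_weights)
```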
However, both of these techniques have limitations and challenges. For example, differential privacy may require a lot of data and computation to strike a good balance between privacy and accuracy. Federated learning may face issues such as communication overhead, device heterogeneity, and malicious attacks. Moreover, neither technique guarantees that the A.I. model will completely forget what it learned from private user data, as there may still be traces of that data in the model's behavior or performance.
Therefore, it is not fair to say that it is virtually impossible to make an A.I. model forget the things it learns from private user data, but it is certainly very difficult and requires careful design and evaluation. There may also be some trade-offs between privacy, accuracy, efficiency, and security that need to be considered.
^^^^ According to Bing Chat
None of this "forgets"; it's remaking the model.
This is absolute BS
Did you read the post all the way through?
Yes.
Then you're missing the point.