Oh that's neat and makes sense.
IIUC, basically: when a neural network has enough neurons/pathways relative to the amount of information it needs to encode, each neuron can be used to encode a single 'feature'. But as the number of things it needs to encode grows, it has to use individual neurons to encode multiple different things, which entangles those concepts together.
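Here's a tiny numpy sketch of the "more features than neurons" situation (my own toy illustration, not anything from the article; the dimensions and numbers are made up):

```python
import numpy as np

# Toy superposition: 32 features crammed into 8 neurons, so the
# feature directions can't all be orthogonal and must overlap.
rng = np.random.default_rng(0)
n_neurons, n_features = 8, 32

# Give each feature a random unit direction in neuron space.
dirs = rng.normal(size=(n_features, n_neurons))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

# Activate only feature 0: the hidden state is just its direction.
hidden = dirs[0]

# Read every feature back out by projecting onto its direction.
readout = dirs @ hidden
print(readout[0])                  # ~1.0: the feature we turned on
print(np.abs(readout[1:]).max())   # clearly nonzero: interference from overlap
```

With 8 neurons for 32 features, reading out one feature picks up noticeable interference from the others; with 32+ neurons you could pick orthogonal directions and the interference would be zero.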
So if there's little toxic training data, the general pattern of toxicity isn't strongly represented, and the concept of toxicity ends up entangled with lots of other stuff. Then when you tell the model not to be toxic, it suppresses a bunch of useful things along with it.
If you instead feed it enough toxic data during training (but not too much), the pattern of toxicity is more strongly isolated in the neuron encoding and less entangled with everything else, so telling it not to be toxic doesn't affect everything else as much.
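And a follow-on sketch of why the entanglement makes "suppress toxicity" destructive (again just my own toy model: I'm treating suppression as projecting a "toxicity direction" out of the hidden state, which is an assumption on my part, not the actual method from the article):

```python
import numpy as np

def collateral_damage(toxic_dir, other_dirs):
    """How much of each other feature is destroyed if we ablate
    (project out) the toxicity direction from hidden states."""
    toxic_dir = toxic_dir / np.linalg.norm(toxic_dir)
    # The component of each feature along the toxic direction is
    # exactly what the projection removes.
    return np.abs(other_dirs @ toxic_dir)

rng = np.random.default_rng(1)
dim = 256
other = rng.normal(size=(5, dim))   # five unrelated "useful" features
other /= np.linalg.norm(other, axis=1, keepdims=True)

# "Entangled" toxicity: its direction overlaps all the other features.
entangled = other.sum(axis=0) + 0.1 * rng.normal(size=dim)
# "Isolated" toxicity: a random direction, nearly orthogonal in 256-d.
isolated = rng.normal(size=dim)

print(collateral_damage(entangled, other))  # large overlaps -> big side effects
print(collateral_damage(isolated, other))   # near zero -> safe to remove
```

In the entangled case every useful feature loses a big chunk when you ablate toxicity; in the isolated case almost nothing else is touched.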
I'm confused by all of this: why is there a "too much"? If I threw all of the toxic sites into the training data and told the AI "this stuff here is toxic", wouldn't the AI just figure out the pattern for toxicity and target it precisely?