Technology
This is the official technology community of Lemmy.ml for all news related to the creation and use of technology, and to facilitate civil, meaningful discussion around it.
Ask in DM before posting product reviews or ads; otherwise, such posts are subject to removal.
Rules:
1: All Lemmy rules apply
2: No low-effort posts
3: NEVER post naziped*gore stuff
4: Always post article URLs or their archived version URLs as sources, NOT screenshots. Help the blind users.
5: Personal rants about Big Tech CEOs like Elon Musk are unwelcome (this does not include posts about their companies affecting a wide range of people)
6: No advertisement posts unless verified as legitimate and non-exploitative/non-consumerist
7: Crypto-related posts, unless essential, are disallowed
I'm failing to see how it's Snapchat's problem. It can't know that the person was nefarious, and it's not reasonable to expect that it should have been able to know. This is like saying that Disney should be held responsible because someone decided to go on a killing spree while wearing the recommended costume of the week. It's two isolated events that happen to coincide with each other.
This is a failure on the parents' side all the way down, from the lack of supervision to allowing the child to make a social media account below the legal age to do so.
Snapchat is not the only problem here, but it is a problem.
If they can't guarantee their recommendations are clean, they shouldn't be offering recommendations. Even to adults. Let people find other accounts to connect to for themselves, or by consulting some third party's curated list.
If not offering recommendations destroys Snapchat's business model, so be it. The world will continue on without them.
It really is that simple.
Using buggy code (because all nontrivial code is buggy) to offer recommendations only happens because these companies are cheap and lazy. They need to be forced to take responsibility where it's appropriate. This does not mean that they should be liable for the identity of posters on their network or the content of individual posts—I agree that expecting them to control that is unrealistic—but all curation algorithms are created by them and are completely under their control. They can provide simple sorts based on data visible to all users, or leave things to spread externally by word of mouth. Anything beyond that should require human verification, because black box algorithms demonstrably do not make good choices.
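To illustrate the contrast the comment above draws, here is a minimal sketch of a "simple sort based on data visible to all users." Everything here is hypothetical (the account names, the follower counts, the `transparent_recommendations` helper); the point is only that any user could reproduce this ordering themselves, since no hidden model decides who gets suggested to whom.

```python
from dataclasses import dataclass

@dataclass
class Account:
    name: str
    followers: int  # a metric every user can see and verify

def transparent_recommendations(accounts, limit=3):
    """Rank accounts by a publicly visible metric.

    Unlike a black-box recommender, this ordering is fully
    auditable: sort by follower count, highest first.
    """
    return sorted(accounts, key=lambda a: a.followers, reverse=True)[:limit]

# Hypothetical data for illustration only.
accounts = [
    Account("alice", 120),
    Account("bob", 950),
    Account("carol", 430),
]
top = transparent_recommendations(accounts)
print([a.name for a in top])  # → ['bob', 'carol', 'alice']
```

Anything fancier than this kind of auditable sort is where the comment argues human verification should kick in.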
It's the same thing as the recent Air Canada chatbot case: the company is responsible for errors made by its software, to about the same extent as it is responsible for errors made by its employees. If a human working for Snapchat had directed "C.O." to the paedophile's account, would you consider Snapchat to be liable (for hiring the kind of person who would do that, if nothing else)?
No, I would not, unless it was proven that the employee knew the person was an S.O. and knew that the account belonged to a minor (though at that point the employee should have disabled the account per Snapchat's policy regardless). If that data was not available to them, then they wouldn't have had the capability to know, so I would consider Snapchat not at fault.
Then, in my opinion, you would have failed to perform due diligence. Even if you'd thought C.O. was an adult, suggesting a woman strike up a private conversation with a man neither of you know is always something that deserves a second look (dating sites excepted), because the potential for harm is regrettably high.
It isn't, and the courts agreed. This seems like an issue with legislative law. As far as I was aware, sex offenders were supposed to have Internet restrictions...
Could there be a good discussion about how to prevent harm to more children? Well, yeah. Some parents just suck, and it's the kid that gets hurt.
As long as it doesn't involve stuff like KOSA, which puts more people in harm's way.