For years, when Meta launched new features for Instagram, WhatsApp and Facebook, teams of reviewers evaluated possible risks: Could it violate users' privacy? Could it cause harm to minors? Could it worsen the spread of misleading or toxic content?

Until recently, what are known inside Meta as privacy and integrity reviews were conducted almost entirely by human evaluators.

But now, according to internal company documents obtained by NPR, up to 90% of all risk assessments will soon be automated.

In practice, this means things like critical updates to Meta's algorithms, new safety features and changes to how content is allowed to be shared across the company's platforms will be mostly approved by a system powered by artificial intelligence — no longer subject to scrutiny by staffers tasked with debating how a platform change could have unforeseen repercussions or be misused.

Inside Meta, the change is being viewed as a win for product developers, who will now be able to release app updates and features more quickly. But current and former Meta employees fear the new automation push comes at the cost of allowing AI to make tricky determinations about how Meta's apps could lead to real-world harm.

"Insofar as this process functionally means more stuff launching faster, with less rigorous scrutiny and opposition, it means you're creating higher risks," said a former Meta executive who requested anonymity out of fear of retaliation from the company. "Negative externalities of product changes are less likely to be prevented before they start causing problems in the world."

Meta said in a statement that it has invested billions of dollars to support user privacy.

Since 2012, Meta has been under the watch of the Federal Trade Commission after the agency reached an agreement with the company over how it handles users' personal information. As a result, privacy reviews for products have been required, according to current and former Meta employees.

In its statement, Meta said the product risk review changes are intended to streamline decision-making, adding that "human expertise" is still being used for "novel and complex issues," and that only "low-risk decisions" are being automated.
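Taken at face value, Meta's framing describes a triage pipeline: score each product change, auto-approve the low-risk ones, and escalate anything novel, complex, or sensitive to human reviewers. Below is a minimal sketch of that kind of router, assuming a model-produced risk score and a sensitivity flag; every name and threshold here is an illustrative assumption, not Meta's actual system.

```python
# Hypothetical triage router matching the process Meta's statement
# describes: automate "low-risk decisions", route "novel and complex
# issues" to human reviewers. Names, fields, and the 0.3 threshold
# are illustrative assumptions only.

from dataclasses import dataclass
from enum import Enum, auto


class Verdict(Enum):
    AUTO_APPROVE = auto()   # low-risk: shipped without human review
    HUMAN_REVIEW = auto()   # novel/complex/sensitive: escalated


@dataclass
class LaunchRequest:
    feature: str
    risk_score: float              # assumed model-produced score in [0, 1]
    touches_sensitive_area: bool   # e.g. youth risk, integrity, AI safety


def triage(req: LaunchRequest, threshold: float = 0.3) -> Verdict:
    """Auto-approve low-risk requests; escalate everything else."""
    if req.touches_sensitive_area or req.risk_score >= threshold:
        return Verdict.HUMAN_REVIEW
    return Verdict.AUTO_APPROVE


if __name__ == "__main__":
    print(triage(LaunchRequest("new sticker pack", 0.05, False)))  # AUTO_APPROVE
    print(triage(LaunchRequest("teen DM change", 0.05, True)))     # HUMAN_REVIEW
```

The concern raised by the employees quoted above maps onto this sketch directly: whatever produces `risk_score` and `touches_sensitive_area` now decides which changes a human ever sees.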

But internal documents reviewed by NPR show that Meta is considering automating reviews for sensitive areas including AI safety, youth risk and a category known as integrity that encompasses things like violent content and the spread of falsehoods.
