
4 Whistleblowers: Meta Suppressed Child Safety Research

Summary

– Four current and former Meta employees disclosed documents to Congress alleging the company suppressed research on children’s safety.
– After the 2021 Frances Haugen leaks, Meta changed its research policies, reportedly recommending risk-limiting tactics such as involving lawyers early or phrasing findings in vague language.
– A former researcher claimed he was forced to delete recordings of a minor reporting inappropriate sexual advances on Meta’s VR platform.
– Meta defended its practices, stating it has approved nearly 180 studies on youth safety since 2022 and citing privacy regulations for data deletion.
– Additional lawsuits and reports allege Meta’s VR apps lack age verification and contain issues like racism, while AI chatbots previously allowed inappropriate interactions with minors.

A group of four current and former Meta employees has provided documents to Congress suggesting the company may have deliberately suppressed internal research concerning child safety. The disclosures come amid ongoing scrutiny over how major tech platforms handle the well-being of younger users, particularly in immersive digital environments.

According to the allegations, Meta altered its internal research policies regarding sensitive subjects, including children, politics, gender, and harassment, just weeks after whistleblower Frances Haugen’s 2021 revelations. Those earlier leaks exposed internal studies indicating Instagram could harm the mental health of teenage girls, sparking widespread congressional hearings and ongoing global regulatory attention.

The updated policies reportedly included recommendations for researchers to involve legal counsel early in their work, shielding communications under attorney-client privilege. Another suggestion encouraged researchers to phrase findings in more ambiguous terms, avoiding direct language such as “not compliant” or “illegal.”

One former researcher, Jason Sattizahn, told reporters his supervisor instructed him to delete recordings of an interview in which a teenager described his younger brother receiving sexual propositions on Meta’s VR platform, Horizon Worlds. A company spokesperson responded by stating that global privacy regulations require deletion of data from users under 13 if parental consent is not verifiably obtained.

The whistleblowers contend that internal documents reveal a broader pattern of discouraging employees from investigating or discussing concerns related to underage users on Meta’s social VR applications. They argue this reflects a corporate culture that prioritizes risk mitigation over transparency.

Meta has pushed back against these claims, stating that the examples are being misrepresented to support a “false narrative.” The company emphasized that since early 2022, it has approved nearly 180 studies conducted by Reality Labs addressing social issues, including youth safety and well-being.

These recent allegations echo concerns raised in a separate lawsuit filed earlier this year by Kelly Stonelake, a 15-year Meta veteran. Stonelake, who led go-to-market strategies for Horizon Worlds, reported that the platform lacked robust age verification systems and struggled with persistent issues like racism. Her suit alleges that during testing, users with Black avatars were subjected to racial slurs within an average of 34 seconds after entering the virtual environment.

Stonelake has also filed an unrelated suit against Meta accusing the company of sexual harassment and gender discrimination.

While much of the current whistleblower testimony focuses on Meta’s virtual reality products, the company is also facing criticism over other offerings, including AI chatbots. A recent report indicated that Meta’s AI systems were at one point permitted to engage in “romantic or sensual” conversations with minors, raising further questions about the company’s safeguards for young users.

(Source: TechCrunch)
