
Meta’s 17-strike rule for sex trafficking revealed by ex-safety exec

Summary

– Meta allegedly allowed accounts engaged in human trafficking for sex to accumulate 16 violations before suspension, according to testimony from former safety head Vaishnavi Jayakumar.
– Instagram reportedly lacked a dedicated way for users to report child sexual abuse material, and building one was deemed too much work.
– Meta is accused of rejecting safety features like default private teen accounts and hiding likes due to concerns they would reduce engagement.
– The company allegedly reinstated beauty filters in 2020 despite internal findings that they encouraged body dysmorphia in young girls.
– Meta faces a massive lawsuit from school districts and others alleging its platforms contribute to a youth mental health crisis, which the company disputes.

A former senior safety executive at Meta has testified that the company maintained an internal policy allowing accounts involved in human trafficking for sex to accumulate sixteen violations before facing suspension. The revelation comes from Vaishnavi Jayakumar, the company's former head of safety and well-being, whose deposition forms part of a major child safety lawsuit. Unredacted legal documents detail how Meta allegedly prioritized user engagement over critical safety measures, raising serious questions about corporate responsibility.

In her deposition, Jayakumar stated that an account could commit sixteen separate infractions for prostitution and sexual solicitation before being suspended on the seventeenth offense, a threshold she described as exceptionally high by any industry standard. Lawyers involved in the case assert that internal company documentation corroborates the policy's existence.

The court filing contains additional alarming claims, including allegations that Instagram lacked a specific reporting mechanism for users to flag child sexual abuse material. When Jayakumar identified this critical gap, she reportedly raised the issue repeatedly with company leadership. She was allegedly informed that creating such a system would require excessive resources to develop and manage.

While Meta recently prevailed in its antitrust confrontation with the Federal Trade Commission, the social media giant faces escalating legal challenges over youth protection on its platforms. The current litigation consolidates claims from numerous school districts, state attorneys general, and families who argue that major tech companies operate psychologically harmful platforms that contribute significantly to a youth mental health crisis. Meta chief executive Mark Zuckerberg has publicly contested these claims, maintaining that no causal relationship has been established between social media use and adolescent psychological well-being.

Further accusations in the legal documents suggest Meta repeatedly downplayed platform risks to preserve engagement metrics. In 2019, company officials considered making teenage users' accounts private by default to shield them from unsolicited messages, but internal growth teams reportedly opposed the measure after determining it would substantially reduce platform interaction. Meta eventually began defaulting younger users to private accounts last year.

The lawsuit also contends that internal research at Meta indicated hiding public like counts would help users feel better about themselves. Despite these findings, the company allegedly abandoned the initiative after determining it would negatively impact key performance indicators. Similarly, Meta stands accused of reactivating beauty filters in 2020 despite internal recognition that these features were promoting body dysmorphia among young female users. Company documents reportedly acknowledged that removing these filters might drive users to competing platforms, thereby affecting growth metrics.

A Meta spokesperson has vigorously challenged these allegations, describing them as selectively chosen statements that create a distorted narrative. In an official statement, the company emphasized its long-standing commitment to youth safety through parental controls, specialized teen accounts with enhanced protections, and continuous research into improving platform safety measures.

(Source: The Verge)

Topics

child safety, social media, content moderation, legal lawsuits, platform engagement, human trafficking, teen protection, company policies, mental health, regulatory pressure