Social Media Firms Privately Discussed Teen Engagement

Summary
– Recently released legal documents reveal social media companies saw significant business value in recruiting teen users and internally discussed the potential harms of heavy platform engagement.
– Internal communications, such as a 2016 Meta email, show companies explicitly prioritized teen growth, with strategies like teen ambassador programs and features mimicking private “Finsta” accounts.
– The companies were aware of risks, including PR concerns about youth safety and research indicating features like autoplay could disrupt sleep, but weighed these against business opportunities.
– Documents show platforms knew young users often violated age or usage rules, with studies finding many teens used apps in school or were under the minimum age.
– While recognizing issues like compulsive use, companies considered mitigation tools like usage limits and framed such safeguards as beneficial for both user wellbeing and sustainable business growth.
Recently unsealed court documents reveal a stark internal picture at major social media companies: a deliberate focus on attracting younger users as a core business strategy, pursued even as the firms grappled with the known risks of that engagement. The records are part of a massive legal action brought by school districts and state attorneys general against Meta, Snap, TikTok, and YouTube, and they show how the platforms privately discussed the immense value of teen adoption alongside concerns about potential harm. A federal judge is set to hear arguments that will shape the scope of the upcoming trials, in which plaintiffs allege that the platforms’ product designs have harmed young people’s mental health.
The internal communications, compiled in a report by the Tech Oversight Project, illustrate a clear business imperative. An email from late 2016 stressed that Meta CEO Mark Zuckerberg had decided the “top priority for the company in H1 2017 is teens,” discussing initiatives like a teen ambassador program for Instagram. The company even explored formalizing the trend of “Finstas” by considering a private Facebook mode that mimicked the appeal of secondary accounts: smaller audiences and plausible deniability. Similarly, a Google slide from November 2020 was bluntly titled “Solving Kids is a Massive Opportunity,” noting that children under 13 represent the world’s fastest-growing online audience. Internal research concluded that family users lead to better retention and greater overall value, with the company acknowledging that student use of Chromebooks in schools made future Google product purchases more likely.
Alongside this strategic focus, the documents show companies were acutely aware of public relations and safety risks. Meta employees in 2016 debated the dangers of its under-21 app Lifestage, with one raising a critical concern: verifying user ages to prevent impersonation and predation would be extremely difficult. By 2018, Meta recognized it might need to delay allowing younger tweens onto Facebook due to “increased scrutiny of whether Facebook is good for Youth.” Google also internally noted that “tech addiction and Google’s role has been making the news,” with a 2018 presentation acknowledging that features like autoplay could be “disrupting sleep patterns,” a function now disabled for users under 18.
Awareness extended to how children were actually using the platforms. A 2017 study for Snap found that 64% of its 13-to-21-year-old users accessed the app during school hours. A redacted 2020 TikTok chat log showed employees relieved that news crews had missed an event where the panelists were “primarily under 13” and openly discussed knowing they were not supposed to have accounts.
Despite these known issues, the records also reveal internal discussions about potential mitigations. A 2023 Snap presentation, based on a study with users, parents, and wellness experts, suggested features like allowing users to disable social media during school or set personal time limits after finding many teens reported being online “all the time.” A 2021 TikTok document admitted compulsive use was “rampant” on its platform, arguing this made it necessary to provide users with better tools to manage their time effectively. The company framed high active engagement as preferable, citing research that passive social media use is more harmful.
Some within the companies even argued that user safeguards could align with long-term business interests. A 2019 Google document proposed disincentivizing “growth that doesn’t support wellbeing,” positing that investing in digital well-being would strengthen its brand and create “a more sustainable path for growth.” In response to the litigation, the companies have uniformly emphasized their commitment to teen safety. Meta has published a webpage contesting the plaintiffs’ evidence, pointing to research showing minimal links between platform use and mental well-being or highlighting other societal factors.
(Source: The Verge)





