UpScrolled’s Fast Growth Fuels Hate Speech Moderation Crisis

▼ Summary
– UpScrolled, a social network that grew rapidly after TikTok’s U.S. ownership change, is struggling with a serious content moderation problem, including the proliferation of racial slurs and hate speech in usernames and hashtags.
– A TechCrunch investigation confirmed the widespread presence of offensive usernames and content glorifying figures like Hitler, and found that reported accounts remained active days after being flagged to the company.
– The ADL also reported that UpScrolled has become a home for antisemitic and extremist content, including material linked to designated foreign terrorist organizations such as Hamas.
– While UpScrolled’s public policy prohibits hate speech and harmful content, the company acknowledges it is struggling to enforce these rules and is expanding its moderation team and technology.
– The founder stated the platform aims for a respectful digital environment but faces a common challenge for networks experiencing rapid user growth, similar to past issues seen on platforms like Bluesky.
The rapid ascent of UpScrolled, a social media platform that gained significant traction following TikTok’s ownership changes, has been overshadowed by a mounting crisis in content moderation. Despite reaching over 2.5 million users, the platform is struggling to control a proliferation of hate speech and racial slurs within usernames and hashtags. This failure to enforce its own community standards raises serious questions about its operational capacity during a period of explosive growth.
An independent investigation confirmed the widespread use of explicitly racist terms in user profiles. These usernames range from direct slurs to combinations of hateful language, including phrases that glorify historical atrocities. When these violations were formally reported, the company’s public response acknowledged the issue, stating it was “actively reviewing and removing inappropriate content” while expanding its moderation team. The advice offered was to avoid engaging with malicious accounts. However, follow-up checks days later revealed that the specific accounts reported, despite the supporting evidence, remained active on the platform.
The problem extends far beyond usernames. A review of the app found that hate speech and extremist material are also prevalent within post captions, hashtags, and multimedia content. This includes text posts filled with racial slurs and imagery that promotes Nazi ideology. The issue has drawn attention from major advocacy organizations. The Anti-Defamation League recently highlighted that UpScrolled has become a haven for antisemitic content and material linked to designated foreign terrorist organizations.
Founded in 2025, UpScrolled promotes a principle of giving every voice “equal power.” This ethos has resonated with users, driving more than 4 million downloads across mobile app stores, a figure that exceeds the company’s own earlier reports. Its published policies explicitly prohibit hate speech, harassment, and content intended to cause harm, aligning with industry norms. Yet there is a clear and growing gap between its stated rules and their enforcement on the ground.
This challenge is not unique; other social networks have faced similar turmoil during sudden growth spurts. The core dilemma involves scaling moderation efforts as quickly as user acquisition. In response to mounting criticism, the company’s founder addressed the controversy in a public video. He admitted that users are posting “harmful content” that violates the platform’s terms of service and stated the company is working to rapidly grow its moderation team and upgrade its technical infrastructure to remove such material more effectively. The success of these efforts will be critical for the platform’s future viability and user trust.
(Source: TechCrunch)
