
TikTok’s Age Verification: A Global Compromise?

Originally published on: January 23, 2026
Summary

– Governments globally are implementing stricter age-based regulations to limit children’s access to social media platforms.
– TikTok is rolling out a new age-detection system in Europe that uses profile data and behavior analysis to flag, not automatically ban, suspected underage accounts for human review.
– Countries like Australia have banned social media for under-16s, while the EU and others are debating similar mandatory age limits and bans.
– Advocacy groups and new laws, especially in the US, are pushing for widespread online age authentication, which a legal expert predicts will become a global legal infrastructure.
– Critics argue that age-verification methods like TikTok’s constitute increased digital surveillance and may be ineffective or harmful, with scalability concerns for other platforms.

Governments across the globe are increasingly taking action to restrict children’s access to social media, driven by concerns that platforms cannot reliably enforce their own age policies. TikTok has now joined other major tech companies in responding to this regulatory push, announcing a new system to detect and restrict users under 13 across Europe. This move follows a year-long test in the United Kingdom designed to proactively find and remove underage accounts.

The company’s approach uses a blend of profile information, content analysis, and behavioral signals to assess whether an account is likely operated by a minor. Importantly, the system does not issue automatic bans. Instead, it flags accounts suspected of belonging to users under 13 and sends them to human review teams for a final decision. TikTok has stated that this process is meant to keep its platform, which officially requires users to be at least 13, safer for younger audiences.
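The pipeline described above can be sketched in a few lines of Python. This is purely illustrative: TikTok has not published its model, so the signal categories, weights, and threshold below are invented for illustration. The one design point taken from the article is that a high score only queues the account for human review; nothing is banned automatically.

```python
from dataclasses import dataclass

@dataclass
class Account:
    user_id: str
    # Hypothetical per-category scores in [0, 1], e.g. derived from
    # profile information, content analysis, and behavioral signals.
    signals: dict

def minor_likelihood(account: Account, weights: dict) -> float:
    """Blend per-category signals into a single 0..1 likelihood score."""
    total = sum(weights.values())
    return sum(w * account.signals.get(k, 0.0) for k, w in weights.items()) / total

def triage(account: Account, review_queue: list, threshold: float = 0.6) -> str:
    # Assumed weights; the real system's features and weighting are unknown.
    weights = {"profile": 0.4, "content": 0.3, "behavior": 0.3}
    if minor_likelihood(account, weights) >= threshold:
        # Key point from the article: no automatic ban. The account is
        # flagged and handed to a human review team for a final decision.
        review_queue.append(account.user_id)
        return "flagged_for_human_review"
    return "no_action"

queue: list = []
acct = Account("u1", {"profile": 0.8, "content": 0.7, "behavior": 0.9})
print(triage(acct, queue))  # flagged_for_human_review
```

The structural choice worth noting is that the classifier's output is advisory: the threshold only gates entry into a review queue, which is where the accuracy trade-offs Goldman raises (false positives against adults) get adjudicated.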

This European expansion occurs against a backdrop of intense international debate regarding social media’s impact on youth. Several countries are exploring or enacting stricter, age-based regulations. Australia pioneered a significant policy last year by banning social media access for children under 16, a prohibition covering platforms like Instagram, YouTube, Snapchat, and TikTok. Within the European Union, lawmakers are advocating for mandatory age limits, with Denmark and Malaysia also considering bans for those under 16.

The sentiment was captured by Danish lawmaker Christel Schaldemose, a European Parliament vice president, who recently argued for an EU-wide prohibition on access for children under 16 to online platforms without parental consent, and an outright ban for those under 13. She described the current situation as a massive, unsupervised experiment where tech giants have unlimited daily access to children’s attention.

Similar concerns are driving advocacy in other nations. In Canada, groups are calling for a new regulatory body focused on online harms targeting youth, a push intensified by incidents involving AI-generated content. Even AI services like ChatGPT are deploying age-prediction tools to apply appropriate safeguards. In the United States, legislative activity is surging, with 25 states having passed some form of age-verification law.

Legal expert Eric Goldman, a professor at Santa Clara University, predicts a wave of new legislation. “Legislatures in the US, just in the calendar year 2026, are likely to pass dozens or possibly hundreds of new laws requiring online age authentication,” he states. He cautions that such government-mandated measures should be viewed as constitutionally suspect forms of compelled censorship. Goldman foresees a global trend, noting that regulators are constructing a legal framework that will eventually require age authentication for most websites and apps.

This raises a critical question: as platforms scramble to comply, does TikTok’s method of monitoring and reviewing, rather than issuing instant bans, represent a reasonable middle ground? The answer largely hinges on one’s perspective regarding digital surveillance.

Goldman characterizes the strategy as sophisticated surveillance, where TikTok monitors user activity to draw inferences about its users. He labels broad age-verification mandates as “segregate-and-suppress laws,” warning that such policy solutions can sometimes expose children to greater risks rather than protecting them. He also points out practical limitations: users may resent the increased monitoring, and false positives (mistakenly identifying an adult as a child) could have serious repercussions for the affected individual.

Furthermore, Goldman notes that while this data-intensive approach might be feasible for a platform like TikTok, most online services lack the depth of user information needed to make reliable age guesses. This inherent limitation means the model is not easily scalable across the broader digital ecosystem, leaving a significant challenge for smaller platforms facing the same regulatory demands.

(Source: Wired)

Topics

social media regulation, age verification, child protection, regulatory pressure, global legislation, digital surveillance, tech giants, age detection systems, platform governance, online harms