
NY May Require TikTok, YouTube, and Instagram to Verify Users’ Ages

Summary

– New York’s SAFE For Kids Act requires social media platforms to verify users are over 18 before granting access to algorithm-driven feeds or nighttime notifications.
– The law aims to protect children’s mental health by restricting addictive features and is part of a broader trend of US online child safety legislation.
– Platforms must give unverified users and minors under 18 chronological feeds and block notifications from midnight to 6 AM, while offering flexible age verification methods that protect user data.
– Companies violating the law face fines up to $5,000 per violation, and it applies to platforms like Instagram and TikTok where users spend significant time on algorithmic feeds.
– The law faces a lengthy implementation process including a 60-day public comment period and potential legal challenges from groups citing free speech concerns.

Navigating the digital world safely is a growing concern for parents and lawmakers alike. A significant legislative push in New York aims to reshape how young people interact with social media, placing new responsibilities on tech giants to verify user ages and limit exposure to potentially harmful content.

Under the Stop Addictive Feeds Exploitation (SAFE) For Kids Act, platforms like TikTok, Instagram, and YouTube may soon be required to confirm that users are over 18 before granting access to algorithmically curated feeds or after-hours notifications. New York Attorney General Letitia James introduced the proposed regulations this week, building on a law signed by Governor Kathy Hochul last year designed to safeguard youth mental health.

This initiative is part of a broader national trend, with states including California, South Dakota, and Wyoming also advancing measures to enforce age verification online. Recent Supreme Court decisions have begun to clarify the legal boundaries for such requirements, particularly concerning adult content, though implementation remains complex.

The proposed rules specify that unverified users or those under 18 must be shown chronological feeds, displaying posts only from accounts they follow, rather than content selected by engagement-driven algorithms. Notifications would be prohibited between midnight and 6 a.m., though officials are still refining how “nighttime notifications” will be defined.
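The two restrictions above can be pictured as a simple gate on feed construction and notification delivery. The sketch below is purely illustrative; all function and field names are hypothetical, and the engagement ranking is a stand-in for whatever a platform actually runs.

```python
from datetime import datetime, time

# The proposed rule's nighttime window for unverified users.
NIGHT_START = time(0, 0)   # midnight
NIGHT_END = time(6, 0)     # 6 a.m.

def rank_by_engagement(posts):
    # Stand-in for an engagement-driven recommendation algorithm.
    return sorted(posts, key=lambda p: p.get("score", 0), reverse=True)

def build_feed(posts, followed_ids, age_verified):
    """Verified adults may get the algorithmic feed; everyone else sees
    posts only from accounts they follow, newest first."""
    if not age_verified:
        followed = [p for p in posts if p["author_id"] in followed_ids]
        return sorted(followed, key=lambda p: p["timestamp"], reverse=True)
    return rank_by_engagement(posts)

def may_notify(now, age_verified):
    """Suppress notifications between midnight and 6 a.m. for users who
    have not verified their age."""
    if age_verified:
        return True
    return not (NIGHT_START <= now.time() < NIGHT_END)
```

The key design point is that the chronological feed is the default: algorithmic ranking is only reachable once verification has succeeded.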

To verify age, companies can use various methods, provided they are both effective and protective of user data. Importantly, platforms must offer at least one alternative to submitting a government ID, such as facial age estimation technology. Parental permission is required for minors to access algorithmic feeds, involving a separate verification step. All personally identifiable information collected during these checks must be deleted immediately after confirmation.
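The verification flow described above, with its immediate-deletion requirement, might look like the following minimal sketch. The method names, fields, and the conservative handling of facial age estimation are assumptions for illustration, not any platform's actual API.

```python
def verify_age(submission):
    """Return (is_over_18, check_succeeded), discarding all submitted
    personally identifiable information once the check completes."""
    try:
        if submission["method"] == "government_id":
            is_over_18 = submission["document_age"] >= 18
        elif submission["method"] == "facial_estimation":
            # Estimators typically return an age range; using the lower
            # bound means borderline results are not auto-approved.
            is_over_18 = submission["estimated_age_low"] >= 18
        else:
            return False, False
        return is_over_18, True
    finally:
        # The proposed rules require deleting PII immediately after
        # confirmation; clearing the submission models that step.
        submission.clear()
```

Because the deletion happens in a `finally` block, the submitted data is wiped whether the check passes, fails, or raises.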

The law targets platforms where users spend a significant share of their time (at least 20 percent) on addictive feeds, defined as feeds that serve content based on user or device data. Violations could result in fines of up to $5,000 per incident.
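The 20 percent coverage threshold reduces to a simple ratio. The sketch below is illustrative only; how regulators will actually measure "time spent" is not specified in the article.

```python
# Share of user time on addictive feeds that brings a platform
# under the law, per the proposed regulations.
ADDICTIVE_FEED_THRESHOLD = 0.20

def is_covered_platform(addictive_feed_time, total_time):
    """Return True when at least 20% of user time is spent on feeds
    that generate content from user or device data."""
    if total_time <= 0:
        return False
    return addictive_feed_time / total_time >= ADDICTIVE_FEED_THRESHOLD
```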

In a statement, Attorney General James emphasized the urgency of the issue, noting that “children and teenagers are struggling with high rates of anxiety and depression because of addictive features on social media.” She believes these rules will help address the youth mental health crisis and create a safer online environment.

However, the path to enforcement is not immediate. A 60-day public comment period is now underway, followed by a year for finalizing the rules. The law will take effect 180 days after that, though legal challenges are anticipated. Trade group NetChoice, which has opposed similar legislation nationwide, previously criticized the SAFE Act as an “assault on free speech,” while the Electronic Frontier Foundation raised concerns about unintended restrictions on adult access to constitutionally protected content.

(Source: The Verge)
