Lawmakers Propose Letting Users Sue Over Harmful Social Media Algorithms

Summary
– Senators Curtis and Kelly introduced the Algorithm Accountability Act to amend Section 230, making platforms liable for recommendation algorithms that cause foreseeable bodily harm or death.
– The bill requires social media platforms to exercise reasonable care in their algorithm design and operation to prevent physical injury, removing Section 230 protections if harm was predictable.
– Victims or their representatives can sue for-profit social media platforms with over a million users for damages if they suffer bodily harm due to a platform’s violation of this duty of care.
– Sponsors claim the bill does not infringe on First Amendment rights, as it exempts direct searches and chronological feeds, and enforcement cannot be based on user viewpoints.
– Critics, including the Electronic Frontier Foundation, warn that platforms may over-censor content to avoid liability, potentially removing even helpful resources intended to prevent harm.

A new bipartisan legislative effort aims to fundamentally alter the legal landscape for social media companies by making them legally accountable for the real-world consequences of their recommendation algorithms. Senators John Curtis, a Republican from Utah, and Mark Kelly, a Democrat from Arizona, have introduced the Algorithm Accountability Act. This proposed legislation carves out a significant exception to the liability shield provided by Section 230 of the Communications Decency Act, the foundational law that has long protected online platforms from being held responsible for content posted by their users.
The core of the bill establishes a legal “duty of care” for social media platforms. It would mandate that for-profit platforms with over one million users exercise reasonable care in the design and operation of their recommendation-based algorithms to prevent foreseeable bodily injury or death. If a platform fails in this duty and its algorithm promotes content that leads to physical harm, the company could be sued for damages by the victims or their families. This legal approach mirrors that of the stalled Kids Online Safety Act (KOSA), signaling a growing legislative focus on holding tech giants responsible for systemic harms.
The bill’s sponsors have sought to preempt anticipated criticism, particularly concerning free speech, and insist the legislation would not infringe on First Amendment rights. The law would not apply to content that users directly search for, nor would it restrict feeds displayed in simple chronological order. Furthermore, enforcement could not be based on the specific viewpoints expressed in user content.
Senator Curtis has been a vocal critic of how algorithms can contribute to real-world violence, pointing to the shooting of conservative activist Charlie Kirk in Utah and suggesting that online platforms and their engagement-driven algorithms played a role in radicalizing the alleged gunman. Senator Kelly, whose wife, former Representative Gabby Giffords, survived a 2011 assassination attempt, has joined him in framing the proposal as a necessary step toward reducing dangerous political tensions.
This legislative push follows high-profile legal challenges that have so far been stymied by Section 230. Earlier this year, a lawsuit alleged that YouTube’s and Meta’s algorithms radicalized a mass shooter by recommending hateful content; a court dismissed the case, citing both Section 230 and the First Amendment. The Algorithm Accountability Act could open the door to a new wave of lawsuits against tech companies for harms ranging from drug abuse to self-injury, even where the underlying speech is legally protected. The loss of Section 230 immunity alone could force platforms into costly, protracted legal battles over their content moderation decisions.
However, digital rights organizations like the Electronic Frontier Foundation (EFF), which has opposed KOSA and other Section 230 reforms, have raised alarms. They warn that despite the bill’s assurances, platforms would be financially incentivized to over-censor. To avoid any potential legal risk, companies might preemptively remove or suppress vast amounts of content, including educational resources and support groups designed to mitigate the very harms the legislation seeks to address.
(Source: The Verge)

