
OpenAI Supports Bill Limiting AI Mass Death Liability

Summary

– OpenAI is supporting an Illinois bill (SB 3444) that would protect AI developers from liability for severe “critical harms” caused by their models, such as mass casualties or billion-dollar damages.
– The bill represents a strategic shift for OpenAI, which previously opposed liability measures, and experts view it as more extreme than its past legislative endorsements.
– The liability shield applies to “frontier models” (costing over $100 million to train) if the developer did not act intentionally or recklessly and has published required safety reports.
– OpenAI argues the bill helps reduce serious risks while avoiding inconsistent state laws, aligning with its push for clear national standards to maintain U.S. innovation leadership.
– A policy expert notes the bill faces low odds of passing in Illinois, citing strong public opposition to exempting AI companies from liability.

In a notable strategic pivot, OpenAI has endorsed proposed legislation in Illinois that would grant significant legal protections to developers of the most advanced artificial intelligence systems. The bill, SB 3444, aims to shield companies from liability for what it terms critical harms, including mass casualty events or catastrophic property damage, provided certain conditions are met. This move signals a shift from the company’s previous defensive posture against liability-focused bills to actively supporting a measure that could establish a new industry standard.

The proposed law offers a liability shield for developers of frontier AI models, defined as systems trained using over $100 million in computational resources. This threshold would encompass major players like OpenAI, Google, and Meta. To qualify for protection, a company must not have acted with intentional or reckless disregard for safety and must have publicly posted detailed safety, security, and transparency reports. OpenAI frames its support as a balanced approach to managing extreme risks while fostering innovation and accessibility.

“We support approaches like this because they focus on what matters most: Reducing the risk of serious harm from the most advanced AI systems while still allowing this technology to get into the hands of the people and businesses of Illinois,” stated OpenAI spokesperson Jamie Radice. The company also emphasized the bill’s potential to prevent a confusing patchwork of state-by-state rules and to encourage clearer national standards.

The legislation specifically outlines scenarios constituting a critical harm. These include a malicious actor using an AI model to engineer a chemical, biological, radiological, or nuclear weapon. It also covers situations where an AI system autonomously engages in conduct that would be a criminal offense if performed by a human, resulting in mass death or over $1 billion in damages. Under the bill, the originating AI lab would not face liability for such outcomes absent proof of intentional or reckless misconduct.

Currently, no federal or state law explicitly determines the liability of AI developers for catastrophic misuses of their technology. As firms release increasingly powerful models, this legal gray area becomes more pressing. In testimony supporting the bill, Caitlin Niedermeyer of OpenAI’s Global Affairs team advocated for a cohesive federal framework for AI regulation. She argued that state laws can be constructive if they align with eventual federal systems, a view that echoes broader Silicon Valley concerns about maintaining US leadership in innovation.

“At OpenAI, we believe the North Star for frontier regulation should be the safe deployment of the most advanced models in a way that also preserves US leadership in innovation,” Niedermeyer stated.

However, the bill faces substantial political hurdles. Illinois has a history of assertive technology regulation, and public sentiment appears strongly opposed to the core concept. Scott Wisor, policy director for the Secure AI project, cited polling showing 90 percent opposition in Illinois to granting AI companies liability exemptions. “There’s no reason existing AI companies should be facing reduced liability,” Wisor remarked, casting doubt on the legislation’s prospects for passage.

(Source: Wired)
