
Anthropic Backs California’s AI Safety Bill SB 53

Summary

Anthropic has endorsed California’s SB 53, a bill that would impose first-in-the-nation transparency requirements on major AI developers, despite opposition from tech groups.
– If passed, SB 53 would require frontier AI developers to create safety frameworks, release public safety reports, and provide whistleblower protections for employees.
– The bill specifically targets catastrophic risks, such as AI models being used to assist in creating biological weapons or conducting cyberattacks.
– SB 53 has faced significant pushback from Silicon Valley and the Trump administration, which argue it could hinder innovation and violate the Commerce Clause.
– The bill has been amended to remove third-party audit requirements and is considered more modest than previous AI safety legislation, with experts believing it has a good chance of becoming law.

Anthropic has officially thrown its support behind California’s proposed SB 53, a groundbreaking piece of legislation that would introduce unprecedented transparency obligations for major artificial intelligence developers. This endorsement represents a significant boost for the bill, especially as influential tech organizations actively campaign against its passage.

In a recent statement, Anthropic acknowledged that while federal oversight might be the ideal approach to AI safety, the rapid pace of innovation demands more immediate action. The company emphasized that governance is inevitable; the real choice lies between proactive planning and reactive scrambling. SB 53, it argues, offers a clear and thoughtful framework for the former.

Should it become law, the bill would mandate that leading AI firms, including OpenAI, Google, and xAI, establish comprehensive safety protocols and publicly disclose security assessments before launching advanced AI systems. It also includes protections for whistleblowers who raise safety issues, encouraging accountability from within the industry.

The legislation specifically targets what it classifies as “catastrophic risks”: scenarios involving mass casualties or economic damage exceeding one billion dollars. Rather than addressing more immediate concerns like deepfakes, SB 53 concentrates on preventing AI from being misused in areas such as biological weapon development or large-scale cyberattacks.

California’s Senate has already given preliminary approval to an earlier draft of the bill, though a final vote is still required before it reaches Governor Gavin Newsom. The governor has not yet taken a public stance on SB 53, though he previously vetoed a related AI safety proposal from the same author.

Opposition to state-level AI regulation has been vocal, particularly from Silicon Valley investors and some federal voices who argue that such measures could stifle innovation and hinder U.S. competitiveness. Critics also contend that AI governance should be handled at the national level to avoid a confusing patchwork of state laws.

Despite these objections, Anthropic co-founder Jack Clark maintains that the industry cannot afford to wait for federal consensus. He described SB 53 as a “solid blueprint” for responsible AI development, one that other jurisdictions may eventually emulate.

OpenAI has expressed concerns about regulatory overreach potentially driving startups out of California, though it did not reference SB 53 directly. Some policy experts, however, view the bill as a more measured and technically informed approach compared to earlier legislative efforts.

Notably, the bill’s requirements align closely with existing internal practices at many leading AI labs, which already produce voluntary safety reports. What SB 53 would do is turn these voluntary measures into legal obligations, backed by financial penalties for non-compliance.

A recent amendment removed a provision requiring third-party audits, a concession to industry concerns about operational burden. This change may improve the bill’s chances of passage while still preserving its core objective: ensuring that powerful AI systems are developed and deployed safely.

(Source: TechCrunch)
