Anthropic CEO Fires Back at Trump Officials Over AI Fear-Mongering Claims

Summary
– Anthropic CEO Dario Amodei issued a statement to clarify the company’s alignment with Trump administration AI policy and counter inaccurate claims about its stances.
– Amodei emphasized that Anthropic’s principle is for AI to be a force for human progress, focusing on useful products, honest risk discussions, and collaboration with serious stakeholders.
– The statement responds to criticism from AI leaders and Trump administration officials who accused Anthropic of fear-mongering to damage the industry and impose left-leaning regulations.
– Amodei defended Anthropic’s cooperation with the federal government, citing work with the Department of Defense, support for Trump’s AI Action Plan, and efforts to expand energy provision for AI development.
– Anthropic has faced backlash for supporting state-level AI safety measures and opposing a ban on such regulations; Amodei argues its stance shields startups rather than harming the ecosystem, and notes the company voluntarily restricts sales of its AI services to China-controlled companies.
Anthropic CEO Dario Amodei issued a clarifying statement this week to address what he described as a surge of misleading characterizations regarding the company’s position on artificial intelligence policy. Amodei emphasized that Anthropic is built on a simple principle: AI should be a force for human progress, not peril. This foundational belief drives the company to create genuinely useful products, discuss both risks and benefits transparently, and collaborate with any party earnestly focused on responsible AI development.
The CEO’s remarks follow pointed criticism from several AI leaders and Trump administration officials, including AI czar David Sacks and White House senior policy advisor Sriram Krishnan. These critics have accused Anthropic of employing fear-based tactics to undermine the industry. The controversy ignited after Anthropic co-founder Jack Clark shared his perspective that AI possesses powerful and somewhat unpredictable qualities, rather than behaving as a reliably controllable tool.
Sacks responded forcefully, alleging that Anthropic is executing a “sophisticated regulatory capture strategy based on fear-mongering,” which he claims has fueled a regulatory frenzy harmful to startups. California state Senator Scott Wiener, who authored the AI safety bill SB 53, came to Anthropic’s defense, criticizing what he described as an effort to block state-level AI protections without establishing federal safeguards in their place. Sacks later intensified his accusations, suggesting Anthropic was collaborating with Wiener to impose a left-leaning regulatory framework.
Additional industry voices, such as Groq COO Sunny Madra, joined the fray, contending that Anthropic’s advocacy for basic AI safety measures is generating chaos across the sector and stifling innovation.
In his detailed response, Amodei argued that addressing AI’s societal impact should prioritize policy over partisan politics. He expressed his belief that all stakeholders share the goal of maintaining America’s leadership in AI while developing technology that benefits its citizens. To demonstrate alignment with the Trump administration, Amodei highlighted several collaborative initiatives. These include providing Anthropic’s AI model Claude to federal agencies and entering into a $200 million agreement with the Department of Defense, which Amodei referred to using the term “Department of War,” echoing President Trump’s preferred language. He also noted Anthropic’s public support for the Trump AI Action Plan and efforts to expand energy resources to secure a competitive edge in the global AI race.
Despite these cooperative gestures, Anthropic has faced backlash from industry peers for diverging from the Silicon Valley consensus on specific policy matters. The company attracted early criticism when it opposed a proposed ten-year moratorium on state-level AI regulations, a measure that faced bipartisan resistance. Many tech leaders, including those at OpenAI, argue that state regulations could decelerate industry growth and cede advantage to China. Amodei countered that the greater risk lies in the U.S. continuing to supply advanced Nvidia AI chips to Chinese data centers. He noted that Anthropic voluntarily restricts sales of its AI services to China-controlled companies, despite the financial impact.
“There are products we will not build and risks we will not take, even if they would make money,” Amodei stated.
Anthropic further distanced itself from certain industry power players by endorsing California’s SB 53, a moderate safety bill requiring major AI developers to disclose frontier model safety protocols. Amodei pointed out that the legislation includes an exemption for companies with annual gross revenue under $500 million, shielding most startups from additional compliance burdens.
Addressing Sacks’s claim that Anthropic aims to harm the startup ecosystem, Amodei wrote, “Startups are among our most important customers. We work with tens of thousands of startups and partner with hundreds of accelerators and VCs. Claude is powering an entirely new generation of AI-native companies. Damaging that ecosystem makes no sense for us.”
Amodei also shared that Anthropic has grown from a $1 billion to a $7 billion run-rate over the past nine months while maintaining what he characterized as thoughtful and responsible AI deployment. He reaffirmed the company’s commitment to constructive policy engagement, stating, “When we agree, we say so. When we don’t, we propose an alternative for consideration. We are going to keep being honest and straightforward, and will stand up for the policies we believe are right. The stakes of this technology are too great for us to do otherwise.”
(Source: TechCrunch)