
Anthropic CEO Defiant as Pentagon Deadline Nears

Originally published on: February 27, 2026
Summary

– Anthropic’s CEO has refused the Pentagon’s demand for unrestricted military access to its AI systems, citing ethical and technological concerns.
– The company specifically objects to two potential uses: mass surveillance of Americans and fully autonomous weapons without human oversight.
– The Pentagon has issued an ultimatum, threatening to designate Anthropic as a supply chain risk or invoke the Defense Production Act to force compliance.
– Amodei highlighted the contradiction in the Pentagon’s threats, which would simultaneously brand Anthropic a security risk and declare its technology essential to national security.
– Anthropic is willing to continue its military work with specific safeguards but is prepared to transition its services to another provider if necessary.

The CEO of Anthropic has taken a firm public stance against providing the Pentagon with unrestricted access to its advanced artificial intelligence systems, setting the stage for a high-stakes confrontation. Dario Amodei stated he “cannot in good conscience” comply with the military’s request, arguing that certain applications conflict with democratic principles and exceed current technological safety limits. His declaration arrives just hours before a critical deadline imposed by Defense Secretary Pete Hegseth, who has demanded the company’s full cooperation.

Amodei’s statement identifies two specific use cases his company finds unacceptable: the mass surveillance of American citizens and the deployment of fully autonomous weapons systems without human oversight. While acknowledging that military strategy is the Pentagon’s domain, Amodei contends that private firms have a responsibility to establish ethical boundaries for their technology. The Department of Defense maintains a contrasting view, asserting its right to utilize AI models for all lawful purposes without restrictions dictated by a contractor.

The Pentagon’s pressure campaign involves a dual-threat strategy. Officials have suggested they could designate Anthropic as a supply chain risk, a label typically applied to foreign adversaries, or invoke the Defense Production Act (DPA) to compel compliance. Amodei highlighted the paradoxical nature of these options, noting one approach brands his company a security threat while the other declares its Claude AI essential to national security. He expressed a hope for reconsideration, given the substantial value Anthropic’s technology offers to the armed forces.

Despite the standoff, Amodei emphasized a desire for an orderly resolution. Anthropic is currently the sole frontier AI lab with systems prepared for classified military work, though reports indicate the Pentagon is preparing alternatives like xAI. The CEO affirmed his company’s strong preference to continue serving the Department of Defense, but only with the requested safeguards in place. Should the Pentagon decide to terminate the relationship, Amodei pledged to facilitate a smooth transition to another provider to prevent disruption to military operations.

The subtext of Amodei’s message is a clear willingness to walk away from the contract rather than compromise on his stated principles. He positions the disagreement as a matter of corporate ethics and technological safety, framing a potential separation as a straightforward business decision rather than an act of defiance. The outcome of this dispute could establish a significant precedent for how AI companies engage with government and military clients on matters of ethical deployment.

(Source: TechCrunch)
