
Pentagon, Anthropic Nearly Aligned Despite Trump’s Claim

Summary

– Anthropic filed court declarations arguing the Pentagon’s claim that it poses a national security risk is based on technical misunderstandings and issues never raised during negotiations.
– The company disputes the government’s assertion that it demanded an operational approval role over military uses, stating this claim is false and was never part of discussions.
– Anthropic highlights a contradiction where a Pentagon official emailed the CEO stating the sides were “very close” on key issues just after formally labeling the company a security threat.
– A technical declaration states Anthropic cannot access or disable its AI once deployed in secured government systems, refuting claims of a remote “kill switch” or operational veto.
– The lawsuit frames the government’s supply-chain risk designation as unconstitutional retaliation for Anthropic’s public AI safety views, which the government denies, calling it a national security decision.

In a significant legal filing, Anthropic has formally contested the Pentagon’s designation of the company as a national security risk, submitting sworn declarations that challenge the government’s technical and factual assertions. The documents, filed in a California federal court, argue the Defense Department’s case relies on misunderstandings and claims never discussed during prior negotiations. This dispute sets the stage for a crucial hearing next week, highlighting a profound clash between a leading AI firm and the U.S. military over the boundaries of technology, policy, and constitutional rights.

The declarations come from Sarah Heck, Anthropic’s Head of Policy, and Thiyagu Ramasamy, the company’s Head of Public Sector. Heck, a former National Security Council official, directly participated in the late February meeting where CEO Dario Amodei met with Defense Secretary Pete Hegseth. In her statement, Heck identifies what she calls a central falsehood in the government’s position: the claim that Anthropic sought an approval role over military operations. She states unequivocally that no employee ever made such a demand during negotiations.

Heck further notes that the Pentagon’s expressed fear, that Anthropic could disable or alter its AI technology during a military operation, was never brought up in discussions. This concern, she asserts, appeared for the first time in legal filings, leaving the company with no prior chance to address it. Her declaration includes a potentially revealing email from Pentagon Under Secretary Emil Michael to Amodei on March 4, sent just after the supply-chain risk designation was finalized. In it, Michael stated the two sides were “very close” on the very issues now cited as evidence of a national security threat: Anthropic’s positions on autonomous weapons and mass surveillance.

This correspondence contrasts sharply with Michael’s public statements in the following days, which declared negotiations were dead. Heck’s implication is clear: if the company’s stance on these issues truly constituted an unacceptable risk, why would a senior Pentagon official suggest an agreement was nearly within reach?

Ramasamy’s declaration provides a technical rebuttal. With a background managing AI deployments for government clients at Amazon Web Services, he addresses the government’s theoretical scenario where Anthropic could interfere with operations. He explains that once the Claude AI model is deployed within a secured, air-gapped government system managed by a third-party contractor, Anthropic has zero access. There is no remote kill switch, backdoor, or method to push unauthorized updates. Any modification would require the Pentagon’s direct approval and action to install, making the idea of an “operational veto” a technical impossibility.

Ramasamy also counters concerns about foreign national employees, noting that relevant personnel have undergone full U.S. government security clearance vetting. He adds that, to his knowledge, Anthropic is the only AI company whose cleared personnel built the AI models intended for classified environments.

The core of Anthropic’s lawsuit is that the unprecedented supply-chain risk designation constitutes government retaliation for the company’s public views on AI safety, violating First Amendment protections. The Pentagon, in its own extensive filing, rejects this framing entirely. It characterizes Anthropic’s refusal to permit all lawful military uses of its technology as a business decision, not protected speech, and maintains the designation was a necessary national security measure, not a punishment for the company’s ethical stance.

(Source: TechCrunch)
