
Anthropic Denies AI Sabotage Capability in Wartime

Originally published on: March 21, 2026
Summary

– Anthropic states it cannot manipulate, disable, or alter its AI model Claude once the U.S. military is using it, rejecting accusations that it could jeopardize operations.
– The Pentagon has designated Anthropic a supply-chain risk, banning Defense Department use of its software, including through contractors.
– Anthropic is challenging the ban in court and seeking an emergency reversal; customers are canceling deals, and a hearing is scheduled for late March.
– The government argues the ban is necessary, fearing Anthropic could disrupt critical military operations by cutting access or pushing harmful updates.
– Anthropic asserts it has no “kill switch,” requires government approval for updates, and does not seek veto power over military tactical decisions.

The ongoing legal dispute between Anthropic and the U.S. Department of Defense centers on a fundamental question of control and trust in military artificial intelligence systems. In recent court filings, the AI company has forcefully denied possessing any capability to sabotage its Claude model if used in wartime operations, directly countering government claims that labeled the firm a supply-chain risk. Anthropic executives assert they have no “back door” or remote “kill switch” to disable or alter the AI’s functionality once it is deployed within military systems. The dispute follows a formal ban on the Pentagon’s use of Claude, a move Anthropic is challenging in court as unconstitutional.

According to a statement from Thiyagu Ramasamy, who leads Anthropic’s public sector division, the company lacks the necessary access to influence the technology during active missions. He emphasized that Anthropic personnel cannot log into Defense Department systems to modify or disable the AI models, stating the architecture simply does not permit such intervention. The company further clarified that any software updates would require explicit approval from both the government and its cloud infrastructure provider, effectively preventing unilateral changes.

This conflict escalated when Defense Secretary Pete Hegseth formally designated Anthropic a supply-chain risk, prohibiting the Department of Defense and its contractors from using Claude software. The government’s core concern, as outlined in its legal arguments, is the potential for the AI lab to jeopardize critical military systems at pivotal moments. Officials worry that Anthropic could disrupt operations by cutting off access or deploying harmful updates if it disapproves of specific military applications, which have reportedly included data analysis, memo writing, and assistance in generating battle plans.

In response, Anthropic has filed two lawsuits and is seeking an emergency court order to reverse the ban; the legal process remains ongoing, with a hearing scheduled for late March. The company maintains it never sought veto power over military decisions. Sarah Heck, Anthropic’s head of policy, referenced a proposed contract from early March that explicitly stated the company understood its license did not grant any right to control or veto lawful operational decision-making by the Pentagon.

Heck also indicated the company was prepared to accept contractual language addressing its ethical concerns, particularly regarding the use of Claude for autonomous lethal strikes without human oversight. However, negotiations between the parties ultimately collapsed, leading to the current impasse. Meanwhile, the Defense Department has stated it is implementing additional measures to mitigate the perceived supply chain risk. These steps involve collaborating with third-party cloud service providers to ensure Anthropic’s leadership cannot make any unilateral alterations to the Claude systems already in use.

(Source: Wired)
