Pentagon Flags Anthropic as Supply Chain Risk

▼ Summary
– President Trump ordered federal agencies to cease using all Anthropic products and to phase them out within six months, declaring the company unwelcome as a federal contractor.
– The Department of Defense subsequently designated Anthropic a supply-chain risk to national security, barring any military contractor or partner from doing business with the company.
– The dispute originated from Anthropic’s refusal to allow its AI models to be used for mass domestic surveillance or fully autonomous weapons, a stance its CEO reiterated.
– OpenAI publicly supported Anthropic’s ethical stance but then moved to secure a deal with the Pentagon, claiming it upheld the same prohibitions on surveillance and autonomous weapons.
– Other major AI firms like Google, which also holds Defense Department contracts, have not officially commented, though some Google employees expressed support for Anthropic.
The federal government has severed its relationship with leading AI firm Anthropic, following a public clash over ethical boundaries for military applications. President Trump directed all federal agencies to cease using Anthropic products, granting a six-month phase-out period but making clear the company is no longer a welcome contractor. This decisive move stems from Anthropic’s refusal to allow its AI models to be used for mass domestic surveillance or fully autonomous weapons systems, restrictions the Pentagon deemed unacceptable.
In a post on Truth Social, the President stated, “We don’t need it, we don’t want it, and will not do business with them again.” While the initial presidential directive did not label the company a national security risk, Defense Secretary Pete Hegseth went further, announcing that the Department of War is designating Anthropic a Supply-Chain Risk to National Security. This designation effectively bars any U.S. military contractor, supplier, or partner from conducting commercial activity with the AI company.
Anthropic’s CEO, Dario Amodei, stood firm on the company’s position, expressing a desire to continue serving the Defense Department, but only with its two safeguards in place. He pledged to ensure a smooth transition to another provider to avoid disrupting military operations. The company’s stance has drawn support from within the competitive AI industry: OpenAI’s Sam Altman reportedly sent a memo to staff affirming shared “red lines,” stating that OpenAI would likewise reject defense contracts involving unlawful uses or those unsuited to cloud deployments, specifically naming domestic surveillance and autonomous offensive weapons.
Notably, shortly after the administration’s order against Anthropic, OpenAI announced a new deal with the Pentagon. Altman asserted that the agreement preserves the same core principles Anthropic defended, prohibiting the very applications that caused the dispute. Reports indicate discussions between OpenAI and the government began earlier in the week. Ilya Sutskever, an OpenAI co-founder who now leads a rival firm, commented on the situation, noting the significance of competitors setting aside differences on critical ethical issues.
The situation remains fluid, with industry observers anticipating further developments. The contracts originally awarded to Anthropic, OpenAI, and Google last July are now in a state of flux. While some Google employees have voiced support for Anthropic’s ethical stand, the company itself has not issued an official statement. This high-stakes conflict underscores the growing tension between rapid technological advancement in artificial intelligence and the establishment of firm ethical guardrails for its most powerful applications.
(Source: TechCrunch)