Anthropic Sues Pentagon Over Supply Chain Risk Label

Summary
– Anthropic has filed a federal lawsuit against the U.S. Department of Defense, challenging its designation as a “supply-chain risk” and seeking to reverse the decision.
– The company argues the designation is legally unsound and an unconstitutional punishment for its protected speech, specifically its limits on military AI use.
– The designation threatens hundreds of millions of dollars in annual government revenue for Anthropic and jeopardizes business for other companies that sell Claude-integrated services to federal agencies.
– The Pentagon’s action followed a public dispute after it demanded suppliers agree their tech could be used for any lawful purpose, which Anthropic resisted over concerns about autonomous weapons and surveillance.
– Legal experts note Anthropic faces a difficult case but its best chance may be proving it was singled out, especially after rival OpenAI secured a new Pentagon contract with specific usage safeguards.

The artificial intelligence firm Anthropic has initiated a federal lawsuit against the U.S. Department of Defense, contesting its recent classification as a supply-chain risk. This legal action, filed in a California court, seeks to overturn the designation and prevent its enforcement, arguing it constitutes an unconstitutional punishment for the company’s policy positions. The dispute centers on the military’s desired use of Anthropic’s Claude AI models and the company’s own restrictions on applications like autonomous weapons systems.
Anthropic CEO Dario Amodei stated the company felt compelled to challenge the decision in court, believing it lacks a solid legal foundation. The lawsuit contends that the government is leveraging its power to retaliate against protected speech, specifically Anthropic’s publicly stated ethical guidelines. As part of its filing, the company is urgently seeking a temporary restraining order to maintain its existing government sales, requesting a swift judicial hearing on the matter.
This designation carries significant financial stakes. Anthropic faces the potential loss of hundreds of millions in annual revenue from Pentagon contracts and other federal business. Furthermore, software companies that integrate Claude into their own government-facing services may be forced to seek alternatives, amplifying the commercial impact. Amodei clarified that the risk label should only affect the direct use of Claude in military contracts, not the general use of its technology by defense contractors.
The Department of Defense declined to comment on the pending litigation. A White House spokesperson emphasized that the military answers to the Constitution, not to a technology company’s terms of service, and affirmed the administration’s commitment to providing service members with necessary tools without being constrained by corporate ideology.
Legal experts specializing in government contracts note that Anthropic faces a steep challenge. The regulations granting the Pentagon authority to declare supply-chain risks offer limited avenues for appeal. The government typically holds broad discretion in setting contract parameters and can determine that a specific product, when used by its suppliers, hinders its operational mission. Anthropic’s most viable legal strategy may be to prove it was unfairly singled out, especially following news that its competitor, OpenAI, secured a new contract with the Defense Department shortly after Anthropic’s designation.
OpenAI stated its agreement includes safeguards to prevent uses like mass domestic surveillance or autonomous weapons direction. The company expressed opposition to the action against Anthropic and said it was unclear why its rival could not negotiate a similar arrangement.
The conflict escalated earlier this year when Defense Secretary Pete Hegseth, a proponent of military AI adoption, required several AI suppliers to grant broad usage rights for any lawful purpose. Anthropic, which provides AI tools for some of the military’s most sensitive applications, resisted this blanket approval. The company maintains its technology is not suited for mass surveillance or fully autonomous weapons, while Hegseth has framed the issue as an unacceptable corporate veto over national defense judgments.
(Source: Wired)
