
Judge Blocks Anthropic Supply Chain Risk Designation

Originally published on: March 27, 2026
Summary

– A federal judge granted Anthropic a preliminary injunction, preventing the Pentagon from labeling it a supply-chain risk.
– The judge ruled the designation was likely unlawful and arbitrary, citing no basis to infer Anthropic might become a saboteur.
– The Department of Defense had used Anthropic’s AI tools but began halting their use, citing objectionable usage restrictions.
– The injunction restores the situation to before the directives, but agencies can still cancel deals on other lawful grounds.
– The immediate impact is unclear, and a separate lawsuit on different legal grounds is still pending in appeals court.

A federal judge has granted a major legal victory to Anthropic, issuing a preliminary injunction that prevents the U.S. Department of Defense from officially labeling the AI firm a supply-chain risk. The decision represents a symbolic blow to the Pentagon and offers a crucial reprieve for the generative AI company as it fights to protect its commercial relationships and public standing. Judge Rita Lin of the U.S. District Court in San Francisco ruled that the government’s designation appeared unlawful.

In her order, Judge Lin stated the Pentagon’s designation of Anthropic as a risk was likely “both contrary to law and arbitrary and capricious.” She specifically challenged the department’s rationale, writing that officials provided “no legitimate basis to infer from Anthropic’s forthright insistence on usage restrictions that it might become a saboteur.” The Department of Defense, which internally uses the historical name Department of War, had been a significant user of Anthropic’s Claude AI for tasks like drafting sensitive documents and analyzing classified information.

The conflict escalated earlier this year when the Pentagon began terminating its use of Claude. Officials pointed to multiple instances in which Anthropic imposed, or attempted to enforce, usage restrictions on its technology that the current administration deemed unnecessary. This led to a series of directives, including the formal risk designation, which effectively began freezing Claude’s use across federal agencies and damaged Anthropic’s business.

Anthropic responded with lawsuits, arguing the sanctions were unconstitutional. During a hearing this week, Judge Lin remarked that the government’s actions appeared designed to illegally “cripple” and “punish” the company. Her Thursday ruling temporarily restores the status quo to February 27, the date just before the contested directives were issued.

Importantly, the injunction does not force the Pentagon to use Anthropic’s products. Judge Lin clarified that her order “does not require the Department of War to use Anthropic’s products or services and does not prevent the Department of War from transitioning to other artificial intelligence providers.” The ruling simply prohibits the government from using the supply-chain risk designation as justification for its actions. Agencies remain free to cancel contracts or instruct contractors to stop using Claude, provided they cite other lawful reasons.

The practical effect is uncertain, as the order does not take effect for seven days. Furthermore, a separate lawsuit filed by Anthropic is still pending before a federal appeals court in Washington, D.C. That case challenges a different legal authority used to bar the company from providing software to the military.

Despite these ongoing challenges, the preliminary injunction provides Anthropic with a powerful tool to reassure nervous customers and partners. The company can now argue that a federal judge has found the government’s primary rationale legally suspect. While Judge Lin has not set a timeline for a final ruling, this decision allows Anthropic to continue its court battle from a strengthened position.

(Source: Wired)
