
Judge Blocks Pentagon Ban on Anthropic

Originally published on: March 27, 2026
Summary

– A judge granted Anthropic a preliminary injunction, temporarily reversing its government blacklisting while its lawsuit proceeds.
– The lawsuit centers on whether the Department of War illegally retaliated against Anthropic for its public stance and press criticism.
– The conflict began when Anthropic refused to allow its AI, Claude, to be used for domestic mass surveillance or lethal autonomous weapons.
– The government’s “supply chain risk” designation is rare for a US company and has significantly threatened Anthropic’s business and partnerships.
– The judge questioned the government’s broad restrictions and the evidence for its claim that Anthropic could sabotage its own technology after sale.

A federal judge has issued a preliminary injunction against the Pentagon, temporarily blocking its ban on contracting with the AI company Anthropic. The ruling marks a significant development in a high-stakes legal battle over First Amendment rights and the government’s authority to designate companies as supply chain risks. The injunction takes effect in seven days, halting enforcement of the ban while the lawsuit proceeds toward a final ruling, which could take months.

Judge Rita F. Lin of the Northern District of California found that the Department of War likely violated the law by retaliating against Anthropic for its public statements. In her order, she noted the department’s records show it designated Anthropic a risk due to its “hostile manner through the press.” She characterized this as “classic illegal First Amendment retaliation.” The dispute originated from a January 9 memo by Defense Secretary Pete Hegseth, which mandated that all AI service contracts include “any lawful use” language within 180 days. This policy would apply to existing agreements with major AI firms, including Anthropic, OpenAI, xAI, and Google.

Anthropic’s negotiations with the Pentagon broke down over the company’s refusal to allow its Claude AI system to be used for two specific purposes: domestic mass surveillance and lethal autonomous weapons. Following the impasse, Secretary Hegseth posted on social media that no military contractor or partner could conduct business with Anthropic, a statement the company argued caused widespread confusion and severe commercial harm. The Pentagon later formally labeled Anthropic a supply chain risk, a designation typically applied to foreign companies, not U.S. firms. Anthropic’s lawsuit contends this action was punitive and unconstitutional.

During a hearing, Judge Lin framed the core conflict but declined to rule on its substance. “Anthropic is saying that its AI product, Claude, is not safe to use for autonomous lethal weapons and domestic mass surveillance,” she stated. “The Department of War is saying that military commanders have to decide what is safe for its AI to do.” She clarified that her role was not to decide who was correct in that debate, but rather to determine if the government violated the law in its actions against the company. She emphasized that the Pentagon remains free to stop using Claude and seek a different AI vendor.

The supply chain risk designation has had immediate and severe consequences for Anthropic. Court filings reveal that numerous partners have expressed confusion and concern about continuing their relationships. Dozens of companies have contacted Anthropic regarding their rights to terminate contracts. The company alleges that, depending on how broadly the government enforces the ban, it risks losing revenue ranging from hundreds of millions to multiple billions of dollars. An Anthropic lawyer argued during the hearing that the company is suffering “irreparable injury” from the directive.

Judge Lin questioned the Pentagon’s rationale and the scope of its restrictions. She pressed a department representative on whether a contractor providing a non-technical service, like toilet paper, would be terminated for using Anthropic for unrelated work. The representative confirmed that for “non-DoW work,” termination would not occur. However, when asked about a contractor providing IT services not related to national security systems, the representative did not give a definitive answer. The judge also scrutinized a Pentagon court filing that suggested Anthropic could theoretically sabotage its technology during military operations if it felt its “red lines” were crossed. She asked what evidence showed that Anthropic retains the ongoing access to or control over Claude, after delivering it to the government, that such sabotage would require.

In a statement, Anthropic spokesperson Danielle Cohen expressed gratitude for the court’s swift action. “We’re pleased they agree Anthropic is likely to succeed on the merits,” she said. “While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government.” The case has sparked a broader debate about the limits of government power over contractors. Judge Lin referenced an amicus brief that used the term “attempted corporate murder,” commenting, “I don’t know if it’s ‘murder,’ but it looks like an attempt to cripple Anthropic.” The preliminary injunction now pauses that effort as the legal process continues.

(Source: The Verge)

Topics

First Amendment retaliation, supply chain risk, AI military use, government contracting, judicial injunction, corporate speech rights, national security, AI ethics, contractor termination, legal precedent