Judge Criticizes Pentagon’s Move Against Anthropic

Summary
– A U.S. judge stated the Pentagon’s designation of Anthropic as a supply-chain risk appears to be illegal retaliation for the company’s public pushback on military AI use.
– Anthropic is seeking a court order to pause this designation, hoping to reassure concerned customers while the judge considers the company’s likelihood of winning the broader case.
– The Department of Defense argues its risk assessment is valid, expressing concern that Anthropic might manipulate its AI software so it does not perform as the military expects.
– Judge Lin found it troubling that the broad restrictions on using Anthropic’s AI did not seem specifically tailored to the stated national security concerns.
– The Pentagon is working to replace Anthropic’s technology with alternatives from other AI firms and has imposed measures to prevent tampering during the transition.
During a federal court hearing this week, a U.S. district judge sharply questioned the Pentagon’s decision to label AI firm Anthropic a supply-chain security risk. Judge Rita Lin suggested the designation appeared to be illegal retaliation against the company for its public efforts to restrict military use of its technology. She stated the action seemed designed to cripple the business and could represent a violation of Anthropic’s First Amendment rights.
The hearing stemmed from one of two lawsuits Anthropic has filed against the government. The company alleges the security risk designation, enacted by the Trump administration, constitutes unlawful punishment. This label was applied after Anthropic sought contractual limits on how the U.S. military could deploy its AI systems. Anthropic is now seeking a preliminary injunction to pause the designation, hoping to reassure nervous customers while the legal battle proceeds. Judge Lin can only grant this pause if she finds the company is likely to prevail in the overall case, with a ruling expected shortly.
This conflict has ignited a wider debate about the ethical deployment of artificial intelligence in defense and the extent to which tech companies must acquiesce to government demands regarding their own products. The Department of Defense contends it followed proper procedures, determining that Anthropic’s AI tools could not be trusted to perform reliably in critical situations. The government has urged the court not to second-guess its national security assessment.
At the hearing, Trump administration attorney Eric Hamilton expressed the government’s fear that Anthropic might manipulate its software so it would not operate as the Pentagon expects. Judge Lin clarified that while the Defense Secretary’s role is to decide on appropriate vendors, her duty is to determine if the Secretary acted outside the law. She found it troubling that the broad directives limiting use of Anthropic’s Claude AI by government contractors did not appear narrowly tailored to address specific security concerns.
The dispute intensified last month when Defense Secretary Pete Hegseth posted on social media that military contractors were barred from any commercial activity with Anthropic. However, Hamilton later conceded in court that Hegseth lacks the legal authority to restrict contractors from using Anthropic for non-Defense Department work. When pressed by Judge Lin on why the Secretary made that post, Hamilton replied, “I don’t know.”
Judge Lin also questioned whether the Pentagon had explored less severe alternatives to distancing itself from Anthropic’s technology. She noted the supply-chain risk designation is a potent tool usually applied to foreign adversaries or hostile actors, not a domestic company. Anthropic’s attorney, Michael Mongan, called the government’s use of this authority against a “stubborn” negotiating partner extraordinary.
The Pentagon has stated it plans to replace Anthropic’s technology with alternatives from companies like Google, OpenAI, and xAI over the coming months, adding it has implemented safeguards to prevent tampering during the transition. A separate ruling in Anthropic’s other case, before a federal appeals court in Washington, D.C., is anticipated soon without a hearing.
(Source: Wired)
