Trump Seeks to Ban AI Firm Anthropic From US Government

Summary
– President Trump ordered all federal agencies to immediately stop using Anthropic’s AI tools, citing a dispute over military applications.
– The Pentagon wants to change its contract with Anthropic to allow “all lawful use” of its AI, which the company fears could enable autonomous weapons or mass surveillance.
– Anthropic has a $200 million deal with the Pentagon and provides custom AI models (Claude Gov) for tasks like intelligence analysis and military planning.
– The conflict tests the tech industry’s shift toward defense work, with some employees at other AI firms supporting Anthropic’s stance against unrestricted military use.
– Despite the order, a six-month phase-out period was announced, allowing time for potential negotiations between the government and Anthropic.

In a significant policy shift, the White House has directed all federal agencies to stop using artificial intelligence tools developed by Anthropic. This directive follows a prolonged and public disagreement between the AI research company and defense officials regarding the ethical boundaries of military applications for advanced AI systems. The order includes a six-month transition period, suggesting a window remains for potential negotiations.
President Donald Trump announced the move, stating in a social media post that “the Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War.” The core of the conflict stems from the Department of Defense’s desire to amend an existing contract. The Pentagon seeks to remove specific restrictions on AI deployment, replacing them with a broad allowance for “all lawful use” of the technology. Anthropic has resisted this change, arguing it could pave the way for AI to operate lethal autonomous weapons systems or enable widespread surveillance of American citizens.
While the Pentagon asserts it does not currently employ AI for such purposes and has no plans to do so, senior administration figures have expressed strong opposition to allowing a private technology firm to impose limits on military use of what they consider a strategically vital tool. Anthropic was a pioneer in this space, becoming the first major AI lab to partner with the U.S. military through a substantial $200 million agreement last year. This partnership led to the creation of specialized models, known as Claude Gov, which have fewer built-in safeguards than the company’s commercial offerings.
These customized models are accessible via platforms from Palantir and Amazon’s cloud services designed for handling classified information. Sources indicate Claude Gov is primarily used for routine administrative duties, such as drafting reports and condensing lengthy documents. However, its applications also extend to more sensitive areas, including intelligence assessment and strategic military planning. Other prominent AI firms, including Google, OpenAI, and xAI, have entered into similar defense agreements, but Anthropic remains uniquely involved with classified systems.
The current standoff highlights a broader cultural transformation within the technology sector. Silicon Valley’s relationship with defense contracting has evolved from cautious avoidance to active participation, and this dispute tests the limits of that new engagement. The tension has sparked internal debates at other companies; hundreds of employees from OpenAI and Google recently signed an open letter backing Anthropic’s stance and criticizing their own employers for relaxing constraints on military AI use.
In a related development, OpenAI’s leadership communicated to staff that they share Anthropic’s concerns, identifying mass surveillance and fully autonomous weapons as unacceptable “red lines.” Despite this, reports indicate OpenAI is simultaneously seeking a revised agreement that would permit its continued collaboration with the Pentagon. The friction between Anthropic and defense authorities became public following reports that military planners used Claude AI to assist in strategizing a high-stakes operation targeting Venezuelan leadership.
After that incident, concerns about the model’s use, allegedly originating from an Anthropic employee, were conveyed to U.S. military officials through Palantir. Anthropic has officially denied raising any objections or interfering with the Pentagon’s operations. The disagreement intensified recently, with officials and company representatives exchanging criticisms on public forums. Defense Secretary Pete Hegseth met with Anthropic CEO Dario Amodei this week, where Hegseth reportedly issued a deadline for the company to accept the revised contract terms while expressing a desire to maintain the working relationship.
(Source: Wired)