White House Frustrated by Anthropic’s AI Limits on Law Enforcement

Summary
– Anthropic’s AI models are restricted from being used for domestic surveillance, which has angered the Trump administration.
– Federal contractors working with agencies like the FBI and Secret Service have encountered obstacles when trying to use Claude for surveillance tasks.
– The conflict stems from Anthropic’s usage policies, which prohibit domestic surveillance applications and which officials view as selectively enforced and vaguely worded.
– Anthropic provides its AI services to national security customers and government agencies for a nominal $1 fee, though restrictions such as its ban on weapons development still apply.
– OpenAI recently secured a similar agreement to supply ChatGPT Enterprise to federal workers, following a broader government deal with multiple AI providers.
The White House has reportedly grown increasingly frustrated with artificial intelligence firm Anthropic over its strict limits on how law enforcement agencies can use its technology. While the company’s advanced Claude models are approved for handling classified material in certain national security contexts, Anthropic explicitly prohibits the use of its AI for domestic surveillance operations, a stance that has drawn sharp criticism from Trump administration officials.
According to recent reports, federal contractors collaborating with organizations such as the FBI and the Secret Service have encountered significant obstacles when attempting to deploy Claude for monitoring activities. Senior White House figures expressed concern that the company applies its policies inconsistently, potentially influenced by political considerations. They also pointed to what they see as intentionally vague language in Anthropic’s terms of service, which they argue allows the firm excessive leeway in interpreting its own restrictions.
This tension is particularly acute because, in some scenarios, Claude represents the only AI platform authorized for top-secret operations via Amazon Web Services’ GovCloud infrastructure. That makes the company’s refusal to support certain law enforcement applications especially problematic for agencies reliant on outside contractors for AI-driven analysis.
Anthropic does maintain an active partnership with the U.S. government, including a symbolic $1 agreement to supply its services to federal bodies. It also collaborates with the Department of Defense, though even there its models cannot be used in weapons development. The company has carved out a specialized offering for national security clients, underscoring its willingness to work within official channels—so long as those uses align with its ethical guidelines.
The situation highlights a broader competitive dynamic within the federal AI procurement space. Just one day after the General Services Administration approved blanket agreements for OpenAI, Google, and Anthropic to provide tools to government workers, OpenAI announced its own arrangement to supply ChatGPT Enterprise to more than two million federal employees at a deeply discounted rate. This move signals a growing rivalry among leading AI providers for lucrative public sector contracts, even as companies like Anthropic hold firm on their operational boundaries.
(Source: Ars Technica)