
OpenAI Secures Pentagon AI Contract Amidst Rival Ban

Originally published on: February 28, 2026
Summary

– OpenAI has secured an agreement with the Pentagon to deploy its AI models within the Department of Defense’s classified military systems.
– The Trump administration simultaneously ordered all federal agencies to cease using rival Anthropic’s AI and designated the company a national security supply chain risk.
– The conflict arose because Anthropic sought contractual safeguards against using its AI for domestic mass surveillance or fully autonomous weapons, which the Pentagon resisted.
– OpenAI’s deal includes principles prohibiting domestic mass surveillance and requiring human responsibility in force, which the Pentagon states are already reflected in existing law and policy.
– This episode highlights how AI has rapidly become a central and politically charged element of national security policy and military procurement.

In a significant development for the integration of artificial intelligence into national defense, OpenAI has secured a contract with the Pentagon to deploy its AI models within classified military systems. This agreement was announced shortly after a presidential directive ordered federal agencies to halt the use of technology from rival firm Anthropic, highlighting a deepening rift over the role of commercial AI in military operations.

The announcement from OpenAI CEO Sam Altman detailed a partnership with the Department of Defense, allowing the use of its models on the department’s classified network. This move follows a swift escalation between the current administration and Anthropic, centered on permissible military applications for artificial intelligence. Earlier the same day, a directive mandated that all federal agencies immediately cease using Anthropic’s products. Furthermore, the Defense Secretary designated the company “a supply chain risk to national security,” a classification that triggers procurement restrictions similar to those applied to certain foreign telecom firms in recent years.

This designation compels the Pentagon to phase out its use of Anthropic’s systems and requires military contractors to certify that their work for the Defense Department does not involve the company’s AI tools. A six-month transition period has been provided. The core of the confrontation lies in a fundamental disagreement: whether AI companies can legally impose limits on how the military utilizes their technology.

Anthropic had sought specific contractual assurances that its flagship Claude model would not be employed for domestic mass surveillance of American citizens or to power fully autonomous weapon systems. While the Pentagon has stated it does not intend to use AI for those purposes, it has firmly insisted that AI models must remain available for all lawful military applications without such restrictions. After negotiations stalled, officials accused Anthropic of attempting to impose ideological constraints on military operations, a charge the company denied, framing its concerns as narrowly focused on safety and constitutional rights.

Shortly after the administration’s actions against Anthropic, Altman revealed that OpenAI had finalized its own pact with the Pentagon. In a social media post, he stated the agreement incorporates two of OpenAI’s key safety principles: prohibitions on domestic mass surveillance and a requirement for human responsibility in the use of force, including within autonomous weapon systems. He noted that the Department of Defense agrees with these principles, which are reflected in existing law and policy.

It is not yet clear how OpenAI’s contractual safeguards differ in practical terms from those Anthropic sought. Pentagon officials have argued that existing U.S. law and Defense Department policy already prohibit the very applications Anthropic worried about, making new contractual standards unnecessary. The dispute grew increasingly political, with the administration criticizing Anthropic’s stance as an overreach that attempts to override constitutional authority and control operational military decisions.

Anthropic has challenged the supply chain risk designation, arguing it exceeds the Pentagon’s statutory authority. The company contends federal law limits such determinations to specific defense contracts and does not grant the executive branch broad power to block all commercial activity with a domestic firm. Anthropic has stated its intention to contest the decision in court. Applying this designation, more commonly associated with foreign-owned security threats, to a U.S.-based AI developer represents a notable shift in how procurement rules are being leveraged in the AI arena.

This episode illustrates the speed at which artificial intelligence has become entrenched in national security policy. While several major AI firms, including OpenAI, Anthropic, Google, and xAI, have secured agreements for military use of their models, concerns over surveillance, autonomous weapons, and model reliability have intensified scrutiny of these deployments. For Anthropic, now facing legal and reputational challenges, the disputed Pentagon contract, valued at up to $200 million, is a symbolically significant part of its business. For OpenAI, the deal solidifies its role as a key partner in the Pentagon’s AI strategy while allowing it to publicly uphold its safety commitments.

Whether the divergent outcomes for these two companies stem from substantive differences in contract terms or simply contrasting negotiation tactics remains uncertain. What is evident is that the dynamic between leading AI developers and the U.S. military has entered a new, more visible, and politically charged chapter.

(Source: The Next Web)
