
Sam Altman Announces OpenAI’s Pentagon Deal with Safeguards

Originally published on: March 1, 2026
Summary

– OpenAI has agreed to allow the Department of Defense to use its AI models on classified networks, with specific safety principles included in the contract.
– The agreement follows a failed negotiation between the Pentagon and Anthropic, which objected to uses like mass domestic surveillance and fully autonomous weapons.
– The U.S. government has criticized and moved to cut ties with Anthropic, designating it as a supply-chain risk for military contractors.
– OpenAI’s CEO stated their contract prohibits domestic mass surveillance and requires human responsibility for the use of force, with technical safeguards in place.
– More than 360 employees from OpenAI and Google signed an open letter supporting Anthropic’s ethical stance on military AI use.

In a significant shift for the artificial intelligence sector, OpenAI has finalized a deal permitting the Department of Defense to utilize its AI models within classified military networks. CEO Sam Altman revealed the agreement late Friday, positioning it as a model for responsible military collaboration. The announcement follows a contentious public dispute between the Pentagon and OpenAI’s competitor, Anthropic, over ethical boundaries for AI in defense applications.

The Pentagon had been pressing AI firms to permit the use of their technology for “all lawful purposes.” Anthropic, however, publicly resisted, drawing firm lines against applications involving mass domestic surveillance and fully autonomous weapons systems. In a detailed statement, Anthropic CEO Dario Amodei contended that in specific scenarios, AI could potentially undermine democratic values rather than defend them. This stance garnered support from hundreds of employees at both OpenAI and Google, who signed an open letter backing Anthropic’s position.

The disagreement escalated when the Pentagon and Anthropic failed to reach terms. President Donald Trump criticized the company in a social media post, labeling its leadership “Leftwing nut jobs” and directing federal agencies to phase out use of its products. Defense Secretary Pete Hegseth went further, accusing Anthropic of attempting to seize veto power over military decisions and designating the company as a supply-chain risk, effectively barring military contractors from doing business with it. Anthropic responded that it had received no formal communication about the status of the negotiation and vowed to challenge any such designation in court.

Amid this conflict, Altman’s announcement presented OpenAI’s path as a contrasting solution. He emphasized that the new contract incorporates explicit safeguards aligned with the very principles that caused the rift with Anthropic. “Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” Altman stated. He confirmed that the Department of Defense agrees with these principles, which are reflected in both law and the new agreement.

Altman detailed that OpenAI will build technical safeguards to ensure model behavior aligns with these rules and will deploy engineers to work alongside Pentagon personnel. He extended an olive branch to the wider industry, urging the Defense Department to offer the same terms to all AI companies and expressing a desire to de-escalate tensions away from legal actions. Internally, Altman assured employees that the government will allow OpenAI to construct its own “safety stack” to prevent misuse, and importantly, would not force the company to make a model perform a task it refuses to do.

The deal was unveiled just as news emerged of escalating international military action, underscoring the urgency of the debate over artificial intelligence’s role in global security and warfare.

(Source: TechCrunch)
