AI Warfare Leaves Safety in the Dust

Summary
– A few years ago, there was a strong global consensus that serious AI regulation and corporate safety prioritization were necessary and inevitable.
– Recent events, including a contract dispute in which the Pentagon sought to remove restrictions on military AI use, have severely damaged hopes for effective AI safety oversight.
– The lack of binding international agreements is fueling an unavoidable AI arms race, as militaries feel compelled to adopt all forms of AI to keep pace with adversaries.
– Anthropic’s key safety initiative, its Responsible Scaling Policy, has failed to create industry-wide consensus as the policy environment now prioritizes AI competitiveness over safety.
– Competition between AI companies has become intensely cutthroat, exemplified by OpenAI quickly securing a Pentagon contract after Anthropic’s dispute, undermining collective safety efforts.
The landscape of artificial intelligence has shifted dramatically, with recent events signaling a troubling departure from earlier commitments to safety and ethical governance. Just a few years ago, a broad consensus existed among tech firms, lawmakers, and the public that robust AI regulation was not only necessary but inevitable. There was talk of international frameworks designed to manage the risks of this powerful technology, ensuring it would be handled with more caution than previous innovations. Companies publicly pledged to prioritize safety over competitive advantage and financial gain. While skeptics warned of worst-case scenarios, a global understanding appeared to be forming: the benefits of AI could be harnessed only by first establishing clear limits on its potential for harm.
That hopeful outlook has suffered a significant setback. The catalyst was a very public dispute between the Pentagon and the AI company Anthropic. Their original contract, established at Anthropic’s request, explicitly prohibited the Department of Defense from using the Claude AI models for autonomous weaponry or mass surveillance of U.S. citizens. The Pentagon has now moved to remove those restrictions. Anthropic’s refusal to comply did not just end their partnership; it led Defense Secretary Pete Hegseth to label the company a supply-chain risk, effectively barring all government agencies from working with them. Beyond the contractual details and reported personal tensions, the core message is stark: the military appears intent on resisting any external constraints on its use of AI, operating solely within its own interpretation of legal boundaries.
This confrontation raises a profound question about how autonomous lethal force became a viable topic for military planning. When did the deployment of killer drones and AI-guided bombs transition from dystopian fiction to a serious consideration? The international debate on the ethics of lethal autonomous systems patrolling battlefields or borders seems to have been bypassed entirely. Secretary Hegseth frames private companies imposing limits on the military as an absurdity. A more compelling argument is that it is alarming that a single corporation must risk its entire existence to act as a check on a potentially runaway technology. In the absence of any binding global treaties, every advanced military now feels compelled to pursue all forms of AI capability simply to avoid falling behind adversaries. An uncontrolled AI arms race now seems all but certain.
The dangers, however, are not confined to the battlefield. Overshadowed by the Pentagon feud was a concerning update from Anthropic on February 24th regarding its Responsible Scaling Policy (RSP). This policy was a cornerstone of the company’s founding ethos, creating a formal link between the release of new AI models and the implementation of corresponding safety measures. It mandated that models should not launch without guardrails to prevent catastrophic misuse, serving as an internal mechanism to ensure safety kept pace with capability. More ambitiously, Anthropic envisioned its RSP sparking a “race to the top,” where adopting such rigorous standards would pressure or inspire rivals to follow suit, ultimately shaping industry-wide norms and pre-emptive regulation.
Initially, this strategy showed promise. Other leading labs like DeepMind and OpenAI incorporated elements of the framework. Yet as investment surged, competition intensified, and the likelihood of federal regulation faded, Anthropic conceded that its policy had failed to achieve its goal. The safety thresholds it established did not foster the broad industry consensus on risk it had hoped for. The company acknowledged in a statement that the policy environment has decisively “shifted toward prioritizing AI competitiveness and economic growth, while safety-oriented discussions have yet to gain meaningful traction at the federal level.”
The rivalry among AI giants has indeed grown fiercely competitive, evolving from a theoretical “race to the top” into a bare-knuckle struggle for dominance. When the Pentagon severed ties with Anthropic, OpenAI swiftly moved to secure its own contract with the Defense Department. OpenAI’s CEO, Sam Altman, claimed his company’s quick action was meant to alleviate pressure on Anthropic. Dario Amodei, Anthropic’s CEO, publicly rejected that explanation. In an internal memo, Amodei accused Altman of trying to “undermine our position while appearing to support it,” suggesting the move was designed to isolate Anthropic and make government retaliation easier. (Amodei later apologized for his tone.) This episode starkly illustrates how the drive for market position and government favor is rapidly eclipsing earlier collaborative ideals on safety.
(Source: Wired)





