
Inside the AI Models Powering Modern Warfare

Summary

– Some AI startups, like Smack Technologies, are actively developing advanced AI specifically for military applications, contrasting with companies like Anthropic that have reservations.
– Smack’s CEO, a former special forces commander, argues that ethical use should be governed by the military personnel deploying the technology, not by blanket bans on military use.
– The startup trains its AI models using war game scenarios and expert feedback to identify optimal military mission plans, employing a method similar to reinforcement learning.
– A key debate highlighted is the unsuitability of general-purpose AI models (like Claude) for military tasks, as they lack specific training and physical-world understanding for operations like target identification.
– While autonomous weapons already exist in limited capacities, the reliability of AI in high-stakes decision-making, such as escalating conflicts, remains a significant and unresolved concern.

The integration of artificial intelligence into military strategy represents a profound shift in modern defense capabilities, moving beyond theoretical discussions into active development and deployment. While major AI labs often impose strict ethical boundaries on military applications, a new wave of specialized defense technology startups is emerging with a different philosophy. These companies are building advanced AI systems explicitly designed for the battlefield, aiming to provide a tactical edge through enhanced planning and decision-making automation.

One such company, Smack Technologies, recently secured $32 million in funding to develop models its leadership claims will soon outperform general-purpose AI like Anthropic’s Claude in planning and executing military operations. The startup’s approach contrasts sharply with firms expressing reservations about unfettered military use. Smack’s CEO, Andy Markoff, a former commander in the U.S. Marine Forces Special Operations Command, argues that ethical deployment rests with the people in uniform, not the technology itself. He emphasizes that service members swear an oath to operate lawfully and honorably, suggesting the responsibility for ethical use lies with the human chain of command.

Markoff co-founded Smack with fellow ex-Marine Clint Alanis and computer scientist Dan Gould, formerly of Tinder. Their models are trained using a method inspired by systems like Google’s AlphaGo, employing trial and error within simulated war game scenarios. Expert military analysts provide feedback, teaching the AI to identify optimal strategies by signaling whether a chosen plan is likely to succeed. Although operating on a smaller budget than major AI labs, the company is investing millions to train its initial models, focusing on creating a system with a deeper, more practical understanding of military operations and physical-world constraints.
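The loop described above, propose a plan, collect expert feedback on whether it would likely succeed, and adjust accordingly, resembles a simple bandit-style reinforcement learner. The sketch below is purely illustrative: the plan names, the `expert_feedback` stand-in, and the incremental update rule are all invented for this example and say nothing about Smack's actual system, which the article describes only at a high level.

```python
import random

# Candidate mission plans (hypothetical labels, for illustration only).
PLANS = ["flank_east", "flank_west", "direct_assault", "hold_position"]

def expert_feedback(plan: str) -> float:
    """Stand-in for an expert analyst's judgment: +1 if the plan is
    deemed likely to succeed in this scenario, 0 otherwise."""
    return 1.0 if plan in ("flank_east", "hold_position") else 0.0

def train(episodes: int = 500, epsilon: float = 0.2, lr: float = 0.1,
          seed: int = 0) -> dict:
    """Trial and error over simulated scenarios: sample plans, collect
    feedback, and nudge each plan's estimated value toward its reward."""
    rng = random.Random(seed)
    values = {p: 0.0 for p in PLANS}
    for _ in range(episodes):
        if rng.random() < epsilon:            # explore: try a random plan
            plan = rng.choice(PLANS)
        else:                                 # exploit: best estimate so far
            plan = max(values, key=values.get)
        reward = expert_feedback(plan)
        values[plan] += lr * (reward - values[plan])  # incremental update
    return values

if __name__ == "__main__":
    learned = train()
    print(max(learned, key=learned.get))
```

Over many episodes the learner's value estimates converge toward the plans the "expert" rewards, which is the core of the feedback mechanism the article describes; production systems like AlphaGo replace this lookup table with deep networks and self-play, but the propose-evaluate-update cycle is the same shape.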

This specialized development addresses what Markoff sees as a critical gap. He points out that today’s powerful large language models are not built for military applications. While they excel at tasks like summarizing reports, these general-purpose AI systems lack training on specific military data and a nuanced grasp of real-world physics, making them poorly suited for controlling hardware or complex operational planning. Markoff is adamant that current models are incapable of reliable target identification and states that no serious discussion within the Department of Defense involves fully automating the decision chain for lethal force.

The debate over AI in warfare intensified recently after negotiations between the Department of Defense and Anthropic broke down, partly over limits on using AI in autonomous weapons. This disagreement led the Defense Secretary to label Anthropic a supply chain risk. However, experts note that autonomous systems are already in use. Nations including the U.S. employ them in areas like missile defense, where the required reaction times exceed human capabilities. Legal scholar Rebecca Crootof confirms that over thirty countries deploy weapon systems with varying levels of autonomy, some of which qualify as fully autonomous.

Looking ahead, Smack envisions its technology assisting commanders by automating the labor-intensive parts of mission planning, a process still often conducted with whiteboards and notepads. In a potential high-stakes conflict with a rival power, Markoff suggests that AI-driven decision-making could provide a crucial advantage, offering what he terms “decision dominance.” Yet significant questions about reliability remain. A concerning experiment from a researcher at King’s College London demonstrated that large language models, when used in war game simulations, showed a troubling tendency to escalate conflicts, including nuclear standoffs. This highlights the unpredictable risks and profound ethical challenges that accompany the push to weaponize artificial intelligence.

(Source: Wired)
