The Dark Side of AI: Killer Chatbots

Summary
– Anduril demonstrated AI-controlled drones successfully intercepting and destroying a simulated enemy aircraft using a large language model to process commands and coordinate the attack.
– The defense industry is rapidly integrating AI into military systems, with projects like Fury developing autonomous fighters to operate alongside crewed jets and streamline complex kill chains.
– US military AI spending has surged dramatically, with a 1,200% increase in federal AI contract funding and a dedicated $13.4 billion allocation in the 2026 defense budget for AI and autonomy.
– Major AI companies like Anthropic, Google, OpenAI, and xAI are now securing military contracts, a reversal of Google’s 2018 withdrawal from Project Maven, a program that has since evolved into one of the military’s most widely used AI tools.
– Current AI models are considered too unreliable for direct battlefield decision-making, but are effective for intelligence gathering and cyber offense, with ongoing development focused on minimizing risks while enhancing real-time capabilities.

At a classified military installation roughly fifty miles from the border with Mexico, the defense technology firm Anduril is pioneering a new application for large language models. During a demonstration last year, I observed four small jet aircraft, designated Mustang, approach from the western horizon. They flew in a coordinated formation over a barren terrain of rock and scrubland. To escape the glare of the sun, I shifted my attention to a computer monitor sheltered by a weathered tarp. With a few keystrokes, a fifth aircraft materialized on the screen, its silhouette deliberately modeled after a Chinese J-20 stealth fighter. An engineer named Colby, dressed casually in a black cap and sunglasses, issued a command to the AI system: “Mustang intercept.” An artificial intelligence, akin to the technology behind popular chatbots, processed the instruction, communicated with the drone squadron, and then confirmed in a calm, synthesized female voice, “Mustang collapsing.” In under sixty seconds, the drones swarmed the simulated intruder and eliminated it using virtual weaponry.
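None of Anduril’s command software is public, but the loop on display here, a free-form order interpreted by a language model, translated into structured tasking for the drones, and acknowledged back to the operator in speech, can be sketched in miniature. The Python below is purely illustrative: every name in it (Tasking, interpret_command, the drone identifiers) is invented, and the model call is replaced with a toy parser rather than any real API.

```python
from dataclasses import dataclass

# Hypothetical structured tasking a language model might emit after
# parsing an operator's spoken order. All names are invented.
@dataclass
class Tasking:
    formation: str   # which group of drones the order addresses
    maneuver: str    # e.g. "intercept"
    target_id: str   # track identifier for the simulated intruder

def interpret_command(utterance: str, tracked_targets: list[str]) -> Tasking:
    """Toy stand-in for the language model: map a free-form order like
    'Mustang intercept' onto structured tasking. A real system would call
    a model here and validate its output before acting on it."""
    formation, maneuver = utterance.lower().split(maxsplit=1)
    return Tasking(formation=formation, maneuver=maneuver,
                   target_id=tracked_targets[0])

def dispatch(tasking: Tasking, drones: list[str]) -> str:
    """Relay the maneuver to each drone, then return the spoken
    acknowledgment the operator hears."""
    for drone in drones:
        print(f"uplink -> {drone}: {tasking.maneuver} {tasking.target_id}")
    return f"{tasking.formation.capitalize()} collapsing."

if __name__ == "__main__":
    drones = ["mustang-1", "mustang-2", "mustang-3", "mustang-4"]
    order = interpret_command("Mustang intercept", tracked_targets=["bogey-j20"])
    print(dispatch(order, drones))   # -> "Mustang collapsing."
```

The design point the demo hints at is the division of labor: the language model only maps an utterance onto a constrained vocabulary of maneuvers, while the flight control itself stays in conventional autonomy software.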
This demonstration underscores the defense sector’s vigorous push to integrate advanced AI into military operations. Anduril is developing Fury, a larger autonomous fighter jet for the US Air Force, intended to operate in concert with piloted aircraft. While many current systems already function autonomously using older AI, the objective is to weave large language models into command structures, enabling them to convey orders and highlight critical data for human operators. It’s a peculiar concept, yet entirely consistent with the often-strange world of defense technology, where vast sums are invested in both revolutionary and questionable projects. The central promise is enhanced efficiency: modern kill chains are immensely complex, and AI theoretically simplifies them, which is a polite way of saying it makes them more lethal. American military strategists firmly believe that global dominance will belong to the nation that masters this technology. This conviction explains the US drive to restrict China’s advancement in AI and motivates the Pentagon’s plans to significantly increase funding in this area. The ongoing conflict in Ukraine, characterized by widespread use of inexpensive, AI-enhanced drones, has vividly illustrated the tactical advantages of autonomous systems on the modern battlefield.
The recent surge in generative AI has further amplified this interest. According to a 2024 analysis from the Brookings Institution, federal contract funding for AI projects skyrocketed by 1,200 percent between August 2022 and August 2023, with the Department of Defense accounting for the overwhelming majority. This trend has only intensified. The proposed trillion-dollar defense budget for 2026 includes a historic dedicated allocation of $13.4 billion for artificial intelligence and autonomous systems. For AI companies, the potential financial rewards from military contracts are enormous. This year, firms including Anthropic, Google, OpenAI, and xAI have each secured defense contracts valued at up to $200 million. This represents a dramatic reversal from 2018, when Google famously withdrew from Project Maven, an initiative to apply AI to the analysis of aerial footage. That project, now developed by Palantir as the Maven Smart System, has become one of the military’s most extensively deployed AI tools. Emelia Probasco, a researcher at Georgetown University focusing on military AI, notes that large language models are exceptionally good at intelligence gathering because of their proficiency in sifting through massive datasets. They are also naturally suited to cyber operations because of their capacity to write and debug code. Probasco expresses concern about the “magical fairy dust” notion that AI is so intelligent it can single-handedly prevent or win wars. She cautions that current models remain too unreliable, error-prone, and opaque in their decision-making to be trusted with battlefield judgments or direct control over weapon systems.
A primary challenge for developers is finding ways to deploy AI that capitalize on its strengths while mitigating its considerable risks. Last September, a consortium including Anduril and Meta bid on a US Army contract worth up to $159 million to create a rugged augmented reality helmet display for soldiers. The system is designed to feed troops vital mission data while interpreting their surroundings, drawing on a new generation of AI models better equipped to understand the physical world in real time. Further ahead, fully robotic soldiers are a subject of serious discussion. Michael Stewart, a former fighter pilot who previously led the US Navy’s office for disruptive capabilities and advocated for AI experimentation with the Fifth Fleet, believes warfare is on an inevitable path toward heavy automation. Stewart, who now runs a global consulting firm, predicts that within ten to twenty years we will see robots operating with significant autonomy on the battlefield. He suggests that if these systems are powered by large language models, they will not merely be silent participants in conflict: they will be able to articulate, in their own words, the actions they took and the reasoning behind them.
(Source: Wired)