Google and OpenAI Staff Back Anthropic’s Pentagon Stance in Open Letter

Summary
– Anthropic is refusing the U.S. Department of War’s demand for unrestricted access to its AI technology, specifically opposing its use for domestic mass surveillance and autonomous weaponry.
– Over 300 Google and more than 60 OpenAI employees have signed an open letter urging their companies to support Anthropic and refuse the military’s unilateral demands.
– While company leaders have not formally responded, informal statements from OpenAI’s CEO and a Google DeepMind scientist show sympathy for Anthropic’s position against surveillance and autonomous weapons.
– The Pentagon has threatened to declare Anthropic a supply chain risk or invoke the Defense Production Act to force compliance if the company does not concede.
– Anthropic’s CEO maintains the company’s refusal, stating the threats do not change their position and they cannot in good conscience agree to the request.
A significant standoff between artificial intelligence firm Anthropic and the U.S. Department of Defense has drawn public support from hundreds of employees at rival tech giants. Over 300 staff members from Google and more than 60 from OpenAI have endorsed an open letter, urging their leadership to back Anthropic’s refusal to grant the Pentagon unrestricted access to its AI systems. The military’s deadline for compliance is fast approaching, raising the stakes in a debate over the ethical boundaries of artificial intelligence in national security.
The core of the dispute centers on two specific applications: domestic mass surveillance and autonomous weaponry. Anthropic has consistently refused to allow its technology to be used for these purposes, a position now bolstered by the employee-led letter. The signatories argue that a united front is essential, stating that the Pentagon’s strategy relies on dividing the companies. They have called on Google and OpenAI executives to publicly support these red lines and collectively refuse the current demands.
While neither Google nor OpenAI has issued an official corporate response, influential figures have voiced sympathy for Anthropic’s stance. OpenAI CEO Sam Altman commented that he does not believe the Pentagon should be threatening companies with the Defense Production Act. A company spokesperson also confirmed that OpenAI shares Anthropic’s opposition to autonomous weapons and mass surveillance. From Google’s camp, Chief Scientist Jeff Dean posted on social media that mass surveillance violates constitutional principles and stifles free expression, noting such systems are easily misused for political or discriminatory aims.
The Pentagon’s pressure tactics have been direct. Defense Secretary Pete Hegseth informed Anthropic CEO Dario Amodei that failure to concede would result in the company being declared a supply chain risk or being compelled to comply through the Defense Production Act. Amodei rejected these threats, calling them contradictory and reaffirming that the company cannot in good conscience agree to the military’s request. This conflict persists despite an existing partnership between Anthropic and the Defense Department for other purposes.
Currently, the military can use some commercial AI tools, such as X’s Grok, Google’s Gemini, and OpenAI’s ChatGPT, for unclassified work, and negotiations are ongoing to adapt these technologies for classified environments. The employee letter and executive comments highlight a growing tension within the tech industry between supporting national security objectives and adhering to self-imposed ethical guardrails. The outcome of this standoff may set a critical precedent for how AI companies engage with government agencies on morally fraught applications.
(Source: TechCrunch)
