
OpenAI Robotics Lead Quits Over Pentagon Contract

Summary

– Caitlin Kalinowski resigned from leading OpenAI’s robotics team, citing the company’s rushed agreement with the Department of Defense as her reason.
– She specifically objected to the lack of defined guardrails against domestic surveillance conducted without oversight and against lethal autonomous weapons.
– OpenAI defended the agreement, stating it establishes red lines against those uses and creates a path for responsible AI in national security.
– The deal was made after similar discussions with Anthropic failed, leading the Pentagon to designate Anthropic a supply-chain risk.
– The controversy has damaged OpenAI’s consumer reputation, leading to a surge in ChatGPT uninstalls and boosting competitor Claude’s app store ranking.

A senior hardware executive at OpenAI has resigned from her position leading the robotics team, citing significant ethical concerns over the company’s newly announced contract with the U.S. Department of Defense. Caitlin Kalinowski stated the decision was driven by principle, emphasizing that the potential for domestic surveillance without judicial oversight and the development of lethal autonomous weapons systems warranted far more deliberation than they received. She clarified that her primary issue was a governance failure, arguing the announcement was rushed without clearly defined ethical guardrails in place.

Kalinowski, who previously led augmented reality hardware development at Meta, joined OpenAI just a few months ago in late 2024. In a public statement, she expressed deep respect for CEO Sam Altman and her colleagues but felt compelled to leave over the fundamental direction implied by the Pentagon agreement. She stressed that while artificial intelligence undoubtedly has a role in national security, certain applications cross critical ethical boundaries that must be rigorously debated and safeguarded.

In response, an OpenAI spokesperson confirmed the departure and defended the partnership. The company asserts its agreement establishes a responsible framework for national security applications while explicitly prohibiting two key activities: domestic surveillance and fully autonomous weapons. OpenAI maintains that its approach involves not only contractual language but also technical safeguards to enforce these red lines, positioning it as a more comprehensive solution than those proposed by other firms.

This defense comes in the wake of a failed negotiation between the Pentagon and AI firm Anthropic, which sought stronger contractual protections against its technology being used for mass surveillance or autonomous weapons. Following the breakdown, the Pentagon labeled Anthropic a supply-chain risk, a designation the company plans to contest legally. Major cloud providers like Microsoft, Google, and Amazon have stated they will continue offering Anthropic’s Claude model to non-defense clients.

The swift announcement of OpenAI’s own deal, which permits its technology to be used in classified environments, has sparked considerable backlash. The controversy has had a tangible impact on the competitive landscape, with reported surges in ChatGPT uninstalls and a corresponding rise in downloads for competing apps like Claude. Industry observers note a shift in consumer preference, as Claude and ChatGPT currently hold the top two spots for free apps in the U.S. App Store, a direct reflection of the public’s heightened scrutiny over AI ethics and corporate partnerships.

(Source: TechCrunch)
