Elon Musk lawsuit scrutinizes OpenAI’s safety record

▼ Summary
– A former OpenAI employee testified that the company shifted from a research-focused to a product-focused organization, compromising its commitment to AI safety.
– The witness cited the deployment of GPT-4 in India via Microsoft’s Bing without evaluation by OpenAI’s Deployment Safety Board as a failure of safety processes.
– Former board member Tasha McCauley testified that CEO Sam Altman misled the non-profit board, including lying about board member intentions and failing to disclose the public launch of ChatGPT.
– The non-profit board’s attempt to fire Altman in 2023 failed when staff sided with him and Microsoft intervened, leading to the board’s reversal and the resignation of the members who had opposed him.
– Expert witness David Schizer stated that OpenAI’s mission to prioritize safety over profits requires taking safety rules seriously, with the process of review being critical.

A federal court in Oakland, California, heard testimony on Thursday that directly challenges OpenAI’s commitment to its founding mission, as Elon Musk’s lawsuit seeks to dismantle the company. The central question is whether OpenAI’s for-profit subsidiary has undermined its original goal of ensuring humanity benefits from artificial general intelligence (AGI).
Rosie Campbell, a former employee who joined OpenAI’s AGI readiness team in 2021 and left in 2024 after her team was dissolved, testified that the company’s focus shifted dramatically. “When I joined, it was very research-focused and common for people to talk about AGI and safety issues,” she said. “Over time it became more like a product-focused organization.” Both her team and the Superalignment team were disbanded during her tenure, signaling a retreat from dedicated safety research.
Under cross-examination, Campbell acknowledged that building AGI requires significant funding, but she stressed that creating a super-intelligent model without robust safety measures contradicts the organization’s original purpose. She cited a specific incident where Microsoft deployed a version of GPT-4 in India through Bing before OpenAI’s Deployment Safety Board (DSB) had evaluated it. While the model itself posed minimal risk, Campbell argued that “to set strong precedents as the technology gets more powerful,” the company needed reliable safety processes that are consistently followed.
OpenAI’s attorneys pressed Campbell to admit that, in her “speculative opinion,” the company’s safety approach is better than that of xAI, Elon Musk’s AI venture acquired by SpaceX earlier this year. OpenAI has publicly released model evaluations and a safety framework, but declined to comment on its current AGI alignment strategy. In February, the company hired Dylan Scandinaro from Anthropic as its head of preparedness, a move CEO Sam Altman said would let him “sleep better tonight.”
The GPT-4 deployment in India was one of several red flags that led OpenAI’s non-profit board to briefly fire Altman in 2023. Former board member Tasha McCauley testified that employees, including then-chief scientist Ilya Sutskever and then-CTO Mira Murati, complained about Altman’s conflict-averse management style. McCauley also described a pattern of Altman misleading the board, including falsely claiming that McCauley wanted to remove fellow board member Helen Toner, who had published a white paper implicitly critical of OpenAI’s safety policy. Additionally, Altman failed to disclose the decision to launch ChatGPT publicly and did not reveal potential conflicts of interest.
“We are a non-profit board and our mandate was to be able to oversee the for-profit underneath us,” McCauley told the court. “Our primary way to do that was being called into question. We did not have a high degree of confidence at all to trust that the information being conveyed to us allowed us to make decisions in an informed way.”
The board’s decision to oust Altman coincided with a tender offer to employees. When staff rallied behind Altman and Microsoft worked to restore the status quo, the board reversed course, and members opposed to Altman resigned. This sequence of events directly supports Musk’s claim that OpenAI’s transformation from a research lab into one of the world’s largest private companies broke the founders’ implicit agreement.
David Schizer, a former dean of Columbia Law School serving as an expert witness for Musk’s team, echoed McCauley’s concerns. “OpenAI has emphasized that a key part of its mission is safety and they are going to prioritize safety over profits,” Schizer said. “Part of that is taking safety rules seriously. If something needs to be subject to safety review, it needs to happen. What matters is the process issue.”
The implications extend beyond a single lab, as AI is now deeply embedded in for-profit companies. McCauley argued that OpenAI’s governance failures underscore the need for stronger government regulation. “[If] it all comes down to one CEO making those decisions, and we have the public good at stake, that’s very suboptimal,” she said.
(Source: TechCrunch)