
OpenAI Researcher Quits Over ChatGPT Ads, Warns of “Facebook” Path

Summary

– Former OpenAI researcher Zoë Hitzig resigned and publicly criticized the company’s new advertising strategy in ChatGPT, comparing its risks to Facebook’s past mistakes.
– Hitzig argued that ads are particularly risky for ChatGPT due to the unprecedented archive of deeply personal user disclosures shared with the chatbot.
– She warned that, like Facebook’s eroded privacy pledges, OpenAI’s economic incentives could lead it to override its own rules on ads over time.
– OpenAI is testing clearly labeled ads in responses for free and lower-tier users, stating they will not influence the chatbot’s answers.
– Hitzig’s resignation adds to a growing industry debate, stemming from her belief that OpenAI has stopped asking the crucial questions she joined to help answer.

A former OpenAI researcher has publicly resigned, citing the company’s new advertising tests within ChatGPT as a catalyst for her departure and warning that the move risks repeating the problematic history of platforms like Facebook. Zoë Hitzig, an economist and poet who spent two years at the AI firm, published an essay explaining her decision to leave, arguing that introducing ads into a platform where users share deeply personal information creates unprecedented risks. She expressed concern that the company is building an economic engine with incentives that may eventually override its own stated principles for user privacy and data protection.

Hitzig did not condemn advertising as inherently unethical. Her primary concern lies in the unique nature of the data collected by ChatGPT. Users frequently confide in the chatbot, sharing intimate details about their health, personal relationships, and spiritual beliefs. They often do so under the assumption they are interacting with a neutral entity free from commercial motives. Hitzig describes this vast collection of personal disclosures as “an archive of human candor that has no precedent.” Placing advertisements within this environment, she argues, fundamentally changes that dynamic and exploits a uniquely vulnerable form of trust.

To illustrate her point, Hitzig drew a direct comparison to the evolution of Facebook. She noted that the social media giant initially made strong commitments regarding user control over data and participatory policy changes. Over time, those promises eroded. Regulatory bodies like the Federal Trade Commission later found that some privacy updates Facebook marketed as empowering actually reduced user control. Hitzig fears OpenAI could follow a similar path, where initial, carefully managed ad implementations give way to more intrusive models driven by powerful financial incentives.

“I believe the first iteration of ads will probably follow those principles,” Hitzig wrote, referencing OpenAI’s current promises. “But I’m worried subsequent iterations won’t, because the company is building an economic engine that creates strong incentives to override its own rules.” Her resignation stems from a growing belief that the organization has shifted away from the foundational questions about AI’s societal impact that originally drew her to work there.

This departure adds a significant voice to an ongoing industry debate, arriving just as OpenAI begins its advertising experiment. The company recently confirmed it is testing ads in the United States for users on its free and lower-cost subscription tiers. These promotions appear at the bottom of ChatGPT responses, are labeled as advertisements, and reportedly do not influence the AI’s answers. Paid subscribers at higher tiers will not see these ads. Despite these safeguards, Hitzig’s warning highlights a broader tension between monetizing powerful AI tools and maintaining the integrity of the human conversations they facilitate.

(Source: Ars Technica)
