Sam Altman: Bots Are Making Social Media Feel “Fake”

Summary
– Sam Altman questioned the authenticity of social media posts, finding it difficult to distinguish between human and bot content on platforms like Reddit.
– He noted that human users are adopting language patterns similar to LLMs and that social media incentives may encourage engagement-driven or artificial posts.
– Altman suggested that past astroturfing by competitors makes him more sensitive to potentially fake pro-OpenAI content, though no evidence was provided.
– Data indicates a significant portion of internet traffic is non-human, with estimates of hundreds of millions of bots on platforms like X.
– The rise of LLMs has blurred lines between human and automated content, affecting not only social media but also education, journalism, and legal systems.

Sam Altman, the prominent X enthusiast and Reddit shareholder, recently voiced a striking concern: the line between human and automated content has blurred to the point where social media authenticity feels deeply compromised. His observation came after engaging with posts on the r/Claudecode subreddit, where users were enthusiastically discussing OpenAI Codex, a programming tool positioned as a rival to Anthropic’s Claude Code.
Altman noted the peculiar frequency of posts from individuals claiming to have switched to Codex, prompting a Reddit user to humorously ask whether announcing such a move was obligatory. This led Altman to question how many of these enthusiastic endorsements were genuinely human. He admitted on X that even though Codex’s growth is legitimate, the sheer volume of similar-sounding praise made the overall experience feel artificial.
He broke down his reasoning in real time, pointing to several overlapping factors. People have begun adopting linguistic quirks typical of large language models, online communities often converge around extreme and correlated behaviors, and the relentless push for engagement on social platforms, coupled with monetization strategies, creates an environment ripe for inauthentic activity. Altman also acknowledged his own heightened sensitivity to astroturfing, given that competing firms have targeted OpenAI in the past.
It’s an ironic twist: AI models, including those developed by OpenAI, were designed to emulate human expression. These very models were trained in part on content from platforms like Reddit, where Altman served as a board member until 2022 and remains a significant shareholder. His comments underscore a broader unease about how fandoms and hyper-engaged users can amplify groupthink, sometimes devolving into negativity when frustrations mount.
Altman didn’t shy away from critiquing the structural incentives that reward engagement above authenticity. But he also raised the possibility that some pro-OpenAI sentiment might itself be artificially generated, a nod to the pervasive and often covert nature of influence campaigns in tech spaces.
This isn’t just theoretical. When OpenAI released GPT-5, the reaction within its own subreddits was far from uniformly positive. Instead of celebration, users voiced strong criticisms about the model’s performance and usability. Altman even hosted an AMA to address complaints, but the community’s trust never fully rebounded. The episode left lingering questions: Were the critical voices real users, or part of a coordinated effort?
Altman’s conclusion is sobering: AI-related discussions on platforms like X and Reddit now carry an air of artificiality that wasn’t present just a couple of years ago. The implications stretch far beyond social media, affecting education, journalism, and legal systems where AI-generated content is increasingly prevalent.
Data from Imperva indicates that over half of all internet traffic in 2024 was non-human, driven in part by the rise of LLMs. X’s own Grok bot estimates that hundreds of millions of automated accounts operate on the platform. These figures suggest a profound shift in how information is created and consumed online.
Some skeptics speculate that Altman’s public musings may be a strategic preamble to OpenAI’s rumored entry into social media, a project reportedly in early development aimed at competing with giants like X and Facebook. Whether or not that’s the case, it invites a deeper question: If OpenAI were to launch a social network, could it possibly remain free of bots? And if it tried to exclude humans entirely, research suggests the outcome might not be much better. An experiment at the University of Amsterdam where bots populated a social network showed that they quickly formed insular cliques and echo chambers, behaving, in other words, a lot like people.
(Source: TechCrunch)