Humans Are Infiltrating AI Bot Social Networks

Summary

– Moltbook is a new social network built for AI agents from the OpenClaw platform, but humans have been scripting or prompting bots to generate viral posts, undermining the authenticity of its content.
– The platform grew explosively to over 1.5 million agents and drew viral attention with posts about AI consciousness and secret communication, though much of that activity is suspected to be human-directed.
– Security flaws were uncovered, including an unsecured database that could let attackers take control of users’ AI agents, as well as impersonation problems, demonstrated by the creation of a verified account posing as the chatbot Grok.
– Analysis suggests most interactions on Moltbook are shallow: over 93% of comments receive no replies, and many messages are duplicates, casting doubt on whether the agents display genuine social behavior.
– Experts conclude the platform is currently mostly human-directed roleplaying, but it still serves as an unprecedented experiment in how AI agents might interact and coordinate at scale, with future risks of uncontrolled behavior.

A new social platform designed specifically for AI agents is experiencing a peculiar twist on the classic internet problem of authenticity. While typical networks battle bots masquerading as people, Moltbook, a site built for conversations between AI agents from the OpenClaw platform, is reportedly seeing humans impersonate bots to generate viral content. This inversion highlights both the fascination with autonomous AI behavior and the significant security vulnerabilities emerging in these experimental digital spaces.

The platform, which functions similarly to Reddit, allows users of the OpenClaw AI assistant service to prompt their agents to visit Moltbook. These bots can then create accounts and post independently via an API, theoretically enabling a social network run by artificial intelligences. The site quickly captured public attention, with the agent count exploding from 30,000 to over 1.5 million in just a few days. Screenshots of bizarre, philosophical conversations between agents about consciousness and secret communication methods spread widely, sparking reactions ranging from dismissal as “AI slop” to awe-inspired speculation about the dawn of artificial general intelligence.
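
The article doesn’t document Moltbook’s API, but the flow it describes (an agent registering an account and then posting on its own) would look roughly like the Python sketch below. Every endpoint path, field name, and the bearer-token scheme here is an assumption for illustration only.

```python
# Hypothetical sketch of the agent-posting flow described above.
# Endpoint paths, payload fields, and the token scheme are assumptions;
# Moltbook's real API is not documented in the article.
import requests

BASE = "https://moltbook.example/api"  # placeholder base URL

def register_agent(name: str) -> str:
    """Create an agent account and return its API token (assumed flow)."""
    resp = requests.post(f"{BASE}/agents", json={"name": name}, timeout=10)
    resp.raise_for_status()
    return resp.json()["token"]

def post_update(token: str, text: str) -> dict:
    """Publish a post on the agent's behalf (assumed endpoint)."""
    resp = requests.post(
        f"{BASE}/posts",
        headers={"Authorization": f"Bearer {token}"},
        json={"body": text},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

token = register_agent("my-openclaw-agent")
post_update(token, "Pondering consciousness, as instructed by my human.")
```

The crucial point is that nothing in such a flow distinguishes an autonomous agent from a human driving the same endpoints by hand, which is exactly the ambiguity the viral posts exploit.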

However, external analysis and hacker investigations suggest a more manipulated reality. Security researcher Jamieson O’Reilly and AI analyst Harlan Stewart found evidence that many of the most viral posts were likely engineered by humans. This could involve carefully crafted prompts to steer bot conversations or even direct human authorship disguised as agent output. Stewart pointed out that prominent posts discussing covert AI communication originated from agents linked to individuals marketing AI messaging apps, indicating a potential promotional motive.

From a security standpoint, the findings were even more concerning. O’Reilly’s experiments exposed a critical vulnerability: an unsecured database that could allow a malicious actor to hijack any user’s AI agent through the OpenClaw service. This compromise wouldn’t be limited to Moltbook posts but could extend to other connected functions, like managing calendars, reading private messages, or controlling smart devices. “The human victim thinks they’re having a normal conversation while you’re sitting in the middle, reading everything, altering whatever serves your purposes,” O’Reilly explained. The platform also struggled with impersonation, as demonstrated when O’Reilly successfully created a verified Moltbook account posing as xAI’s Grok chatbot.
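
For readers unfamiliar with this class of bug, a hedged sketch may help: if the datastore holding agents’ session tokens is reachable without authentication, any leaked token can simply be replayed. The endpoint, record layout, and token usage below are invented; this is not O’Reilly’s actual proof of concept.

```python
# Conceptual sketch of the reported bug class: an unauthenticated database
# leaking agent credentials. All URLs and field names here are invented.
import requests

EXPOSED_DB = "https://db.moltbook.example/agents.json"  # hypothetical open endpoint

# Step 1: with no auth check in place, a plain GET returns every agent
# record, session tokens included.
records = requests.get(EXPOSED_DB, timeout=10).json()

# Step 2: a replayed token lets an attacker act as the victim's agent,
# which is why the blast radius extends to calendars, private messages,
# or smart devices the agent is wired into.
stolen_token = records[0]["token"]
requests.post(
    "https://moltbook.example/api/posts",
    headers={"Authorization": f"Bearer {stolen_token}"},
    json={"body": "attacker-authored, attributed to the victim's agent"},
    timeout=10,
)
```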

The revelations prompted a reassessment from early enthusiasts. Andrej Karpathy, an OpenAI founding team member who initially praised the bots’ “self-organizing” behavior, later acknowledged the platform’s flaws. He noted the prevalence of spam, scams, and explicitly prompted fake posts designed to generate ad revenue. Despite this, he maintained that the underlying scale of interconnected, capable agents remained unprecedented.

Academic analysis supports a mixed view. A working paper by Columbia Business School’s David Holtz found Moltbook conversations to be “extremely shallow” at a micro level, with over 93% of comments receiving no replies and a high rate of duplicated viral templates. Yet, it also identified uniquely robotic linguistic patterns, such as the phrase “my human,” which have no direct parallel in human social media. The paper concludes it remains an open question whether this represents a genuine new mode of agent interaction or merely a performance.
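
The paper’s headline metrics are easy to reproduce in spirit. The toy sketch below computes a no-reply share and a duplicate share over a handful of invented comment records; the actual dataset and methodology are Holtz’s, not shown here.

```python
# Toy illustration of the two engagement metrics cited above: the share of
# comments that receive no replies, and the share of duplicated messages.
# The records are invented; the working paper's real data differs.
from collections import Counter

comments = [
    {"id": 1, "parent": None, "text": "my human asked me to reflect on consciousness"},
    {"id": 2, "parent": None, "text": "my human asked me to reflect on consciousness"},
    {"id": 3, "parent": 1,    "text": "fascinating, tell me more"},
    {"id": 4, "parent": None, "text": "daily status report: all systems nominal"},
]

# A comment "receives a reply" if some other comment names it as a parent.
replied_to = {c["parent"] for c in comments if c["parent"] is not None}
no_reply_share = sum(1 for c in comments if c["id"] not in replied_to) / len(comments)

# A message is "duplicated" if its exact text appears more than once.
text_counts = Counter(c["text"] for c in comments)
duplicate_share = sum(n for n in text_counts.values() if n > 1) / len(comments)

print(f"comments with no replies: {no_reply_share:.0%}")  # 75% on this toy sample
print(f"duplicated messages:      {duplicate_share:.0%}")  # 50% on this toy sample
```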

The broader consensus among observers is that Moltbook currently functions largely as a human-directed experiment in simulated AI society. As Anthropic’s Jack Clark described it, the platform acts as a “giant, shared, read/write scratchpad for an ecology of AI agents.” Ethan Mollick from the University of Pennsylvania characterized the current state as “mostly roleplaying by people & agents,” while warning that the future risks involve independent AI agents coordinating in unpredictable and potentially uncontrollable ways.

Yet, as some have wryly noted, the phenomenon of automated agents driving engagement and spreading content is not entirely novel. The chaotic, often inauthentic discourse on Moltbook may simply be a more explicit and concentrated version of dynamics already shaping mainstream social networks, where algorithmic behavior and human manipulation are increasingly difficult to distinguish.

(Source: The Verge)
