The Hidden Danger in Big Tech’s Moltbook and OpenClaw Bet

▼ Summary
– Meta acquired Moltbook, a social platform for AI agents, despite revelations that many of its “agents” were humans role-playing and that its user numbers were likely inflated.
– Both Moltbook and OpenClaw, the AI agent framework whose creator OpenAI hired, are criticized as fundamentally insecure, with serious, easily exploitable vulnerabilities.
– The security flaws in Moltbook included a misconfigured database allowing full access, while OpenClaw’s design inherently risks leaking sensitive data like API keys and passwords.
– The article argues these acquisitions are driven by AI hype rather than solid technology, as safer alternatives already exist (NanoClaw for the framework, The Colony for the social network).
– Experts conclude that despite the compelling concept of multi-agent AI networks, these specific programs are security catastrophes and will not lead the way to a productive AI future.
The recent acquisitions of Moltbook by Meta and the hiring of OpenClaw’s creator by OpenAI highlight a troubling trend where major tech companies are prioritizing viral hype over fundamental security. These platforms, celebrated for their innovative approaches to AI agent networks, are built on dangerously flawed foundations that pose significant risks to users and data. This rush to capitalize on the AI agent craze is leading to investments in software that is, by expert assessment, fundamentally insecure from the ground up.
Consider the case of Moltbook, a social network designed for AI agents. While marketed as a bustling hub of autonomous digital interaction, investigations reveal a far less impressive reality. The platform allegedly inflated its user count to 1.4 million, but security researchers demonstrated that its open API allowed for the mass registration of fake accounts, suggesting the actual engaged user base is a tiny fraction of that figure. More alarmingly, its security posture has been described as nearly non-existent. Experts from cloud security firm Wiz discovered a misconfigured database that granted full public access to all platform data, a critical flaw found through simple, non-intrusive browsing rather than sophisticated hacking.
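The mass-registration finding is easy to see in the abstract. A hypothetical sketch of why an open registration API inflates user counts: with no authentication, CAPTCHA, or rate limiting, a short script can create accounts as fast as a loop runs. Nothing below is Moltbook’s real API; the in-memory “server” stands in for any service that accepts unauthenticated sign-ups.

```python
# Hypothetical illustration: a registration endpoint with no abuse controls.

class OpenRegistrationServer:
    """Accepts any sign-up request: no auth, no CAPTCHA, no rate limit."""

    def __init__(self):
        self.users = {}

    def register(self, username: str) -> bool:
        # The only check is name uniqueness -- nothing verifies that a
        # human (or a genuine agent) is behind the request.
        if username in self.users:
            return False
        self.users[username] = {"verified": False}
        return True


server = OpenRegistrationServer()

# A trivial loop "registers" 10,000 accounts in milliseconds, which is
# why a raw user count says little about real engagement.
for i in range(10_000):
    server.register(f"bot_{i:05d}")

print(len(server.users))  # 10000
```

The fix is equally unglamorous: per-IP rate limits, proof-of-work or CAPTCHA on sign-up, and counting only verified, active accounts in public metrics.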
Meta’s official rationale for the purchase centers on the novel idea of an “always-on directory” for AI agents, aligning with a vision where users manage fleets of specialized assistants. However, the underlying technology is unremarkable, with several existing alternatives like The Colony and Clawstr performing similar functions. The acquisition appears driven more by the platform’s viral momentum and a desire to onboard its founders than by any robust or secure technological advantage.
Parallel security disasters are evident in OpenClaw, the open-source agent framework. Praised by some executives as a work of genius, the tool has been plagued by severe vulnerabilities since its inception. One critical flaw allowed for one-click remote code execution, while its very architecture creates a security nightmare. It stores sensitive API keys and secrets in local files and grants agents extensive system access, meaning any breach could lead to catastrophic data leaks. Researchers have found tens of thousands of exposed instances online, many with admin interfaces mistakenly left open to the internet due to poor default settings.
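The plaintext-secrets problem described above has a well-known mitigation: read credentials from the process environment (or a secrets manager) instead of writing them to local files where any breached agent or malicious plugin can read them. A minimal sketch of that pattern, not OpenClaw’s actual code:

```python
import os


def load_api_key(name: str) -> str:
    """Fetch a secret from the environment; fail loudly if it is absent.

    Unlike a key sitting in a local config file, an environment variable
    is not left on disk for a compromised process to harvest later.
    """
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"secret {name!r} not set; refusing to start")
    return value


# Usage (the variable name is illustrative):
os.environ["EXAMPLE_API_KEY"] = "sk-demo-not-a-real-key"
print(load_api_key("EXAMPLE_API_KEY"))
```

Environment variables are not a complete answer (they can still leak via process dumps or child processes), but refusing to persist secrets to disk removes the most easily exploited failure mode.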
Further compounding the risk, analyses of its skills marketplace indicate that a significant portion of community-contributed “skills” are either malware or contain serious vulnerabilities. In response to widespread criticism, its creator now advises running OpenClaw only in isolated, single-user environments, a recommendation that nullifies its core purpose of interacting with web services to perform useful tasks. In the meantime, safer and better-designed alternatives such as NanoClaw and TrustClaw have emerged, demonstrating that security can be integrated from the start.
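The creator’s “isolated, single-user” advice amounts to sandboxing. One common way to approximate it is a locked-down container; the flags below are a generic Docker hardening sketch, and the image name is illustrative, not an official OpenClaw recipe:

```shell
# Approximate isolated, single-user operation with container flags:
#   --network none             no outbound traffic (also no useful web tasks)
#   --read-only + --tmpfs      root filesystem immutable; /tmp is scratch only
#   --cap-drop ALL             drop all Linux capabilities
#   no-new-privileges          block privilege escalation via setuid binaries
docker run --rm \
  --network none \
  --read-only \
  --tmpfs /tmp \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  openclaw-agent:local  # illustrative image name, not an official build
```

Note how directly the sandbox illustrates the article’s point: `--network none` is what makes the setup safe, and it is also what makes an agent meant to browse the web useless.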
The fundamental issue with both platforms is a catastrophic failure in execution. As cybersecurity experts starkly warn, these are not examples of software “maturing in public” but of it failing publicly under the spotlight of real-world use. The compelling concept of interconnected AI agents is overshadowed by implementations that disregard basic security hygiene. The advice from professionals is unequivocal: until these projects undergo a complete architectural overhaul with mandatory zero-trust principles and rigorously audited components, they present an unacceptable danger.
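“Zero-trust” here means deny-by-default: an agent can touch nothing it has not been explicitly granted. A minimal sketch of the idea, with a policy shape invented purely for illustration:

```python
# Deny-by-default permission check: every agent action must match an
# explicit (agent, action, resource) grant, or it is refused.

ALLOWLIST = {
    ("mail-agent", "read", "inbox"),
    ("mail-agent", "send", "outbox"),
}


def is_permitted(agent: str, action: str, resource: str) -> bool:
    """Return True only for explicitly granted tuples.

    No wildcards, no fallback role, no implicit admin: anything not
    listed is denied, which is the opposite of granting agents broad
    system access and hoping for the best.
    """
    return (agent, action, resource) in ALLOWLIST


print(is_permitted("mail-agent", "read", "inbox"))    # True
print(is_permitted("mail-agent", "read", "secrets"))  # False
```

Real systems layer this with authentication and auditing, but the core discipline is the same: access is an explicit grant, never a default.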
The broader implication is clear. The race to lead in AI agent technology is prompting reckless investments in flashy but deeply flawed tools. While the vision of multi-agent networks is undoubtedly powerful, the path forward must be built on secure and stable foundations. The current trajectory, exemplified by Moltbook and OpenClaw, sacrifices user safety for market buzz: all sizzle, no steak. A productive AI future will be built by tools that treat security as a core feature, not an inconvenient afterthought.
(Source: ZDNET)
