Over 40,000 OpenClaw Instances Found Exposed Online

▼ Summary
– Widespread misconfiguration has left over 40,000 OpenClaw AI assistant instances exposed to the public internet, creating a significant security risk.
– These exposed instances are vulnerable, with many exploitable via remote code execution attacks that could allow complete host machine takeover.
– Threat actors are already exploiting these exposures, with hundreds of instances linked to prior breach activity and known vulnerabilities.
– Additional risks include indirect prompt injection attacks and the leaking of API keys, which further amplify the potential impact.
– SecurityScorecard advises users to secure deployments by limiting access, adopting a zero-trust mindset, and rigorously testing the technology before full integration.
A significant number of publicly accessible OpenClaw AI assistant instances have been discovered online, posing a serious security risk. Security researchers at SecurityScorecard identified over 40,000 exposed instances linked to nearly 29,000 unique IP addresses, with the count continuing to grow. This widespread misconfiguration grants potential attackers a dangerous foothold, providing them with the same level of access to connected systems and data that the AI agent itself possesses.
The investigation revealed that exploitation is not merely theoretical but is actively happening. SecurityScorecard linked 549 of the exposed instances to previous breach activity and found 1,493 associated with known vulnerabilities. Alarmingly, nearly two-thirds of all observed deployments are considered vulnerable. The most critical threat comes from remote code execution (RCE) vulnerabilities, which impact 12,812 instances and could allow a malicious actor to seize full control of the underlying host server.
Researchers emphasized that this pattern repeats a common security failing. “The more centralized the access, the more damage a single compromise can cause. What looks like convenience is actually a concentration of risk,” the report stated, drawing parallels to past issues with cloud services and shadow IT. The availability of public exploit code for three high-severity vulnerabilities in OpenClaw makes successful attacks even more probable.
Geographically, most exposed instances are located in China, with the United States and Singapore following. The information services sector appears to be the most affected industry, ahead of technology, manufacturing, and telecommunications companies. Beyond RCE, these AI agents face another subtle threat: indirect prompt injection. This technique involves hiding malicious instructions within a website or message that the agent reads. The system will then obediently execute those commands, often without the owner’s knowledge.
Compounding the problem, some users have inadvertently leaked API keys for third-party services through their OpenClaw control panels. This mistake dramatically expands the potential damage of a breach, giving attackers keys to external platforms and data.
To mitigate these risks, SecurityScorecard provides critical guidance for securing any agentic AI deployment. Organizations must aggressively limit access permissions, granting only what is absolutely necessary and reviewing these privileges frequently. Adopting a zero-trust security model is essential, where every action by an agent or tool is verified, not assumed to be safe. Users must also scrutinize the logic and components their AI relies on and remain vigilant against prompt injection.
Treat every AI agent as a privileged identity capable of causing significant harm if compromised. Jeremy Turner, VP of Threat Intelligence and Research at SecurityScorecard, offered straightforward advice: avoid blindly deploying new AI tools on systems with access to sensitive personal or corporate data. He recommends building in separation and conducting thorough testing to understand the technology’s behavior before placing any trust in it.
(Source: InfoSecurity Magazine)





