
Anthropic exposed unreleased AI model in unsecured database

Originally published on: March 28, 2026
Summary

– Anthropic exposed nearly 3,000 unpublished assets, including draft blog content and internal files, due to a misconfigured content management system that made files public by default.
– The leaked data contained sensitive information, such as details of an unreleased, advanced AI model and an upcoming invite-only CEO retreat in Europe.
– The company attributed the security lapse to “human error in the CMS configuration” and stated it was unrelated to its own AI tools like Claude.
– While Anthropic downplayed the significance, calling the materials early drafts, cybersecurity researchers confirmed the data was publicly accessible to anyone with technical knowledge.
– This incident mirrors similar leaks at other tech firms like Apple and Google, but the proliferation of AI coding tools may now make such exposed data easier to discover.

A significant security oversight at Anthropic has led to the unintended exposure of sensitive internal data, including details about an unreleased AI model and a private CEO event. The incident, which involved nearly 3,000 unpublished assets, stemmed from a misconfigured content management system that left draft materials publicly accessible without requiring any login. Cybersecurity experts reviewing the situation confirmed the data was stored in a public-facing system where files were accessible by default unless explicitly restricted, a setup the company attributes to human error.
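To make that failure mode concrete, the following is a minimal Python sketch of the kind of check researchers run in such cases: it issues unauthenticated HEAD requests against a hypothetical asset host and reports anything readable without a login. The domain and file names here are invented for illustration; the actual system involved was not named.

```python
import requests

# Hypothetical base URL for a CMS asset store (not the real system).
BASE_URL = "https://cms.example.com/assets"

# Illustrative draft-asset names an auditor might test.
candidates = ["draft-blog-header.png", "launch-announcement.md"]

for name in candidates:
    # A HEAD request with no auth headers or cookies: if the server
    # answers 200, the file is readable by anyone, i.e. public by default.
    resp = requests.head(f"{BASE_URL}/{name}", allow_redirects=True, timeout=10)
    status = "PUBLIC" if resp.status_code == 200 else f"blocked ({resp.status_code})"
    print(f"{name}: {status}")
```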

The exposed cache included a wide array of materials, from draft blog post images and logos to more consequential documents. Among these were specifics about upcoming product announcements, notably information on a new, highly capable AI model described internally as the most advanced the company has trained. After being notified, Anthropic secured the data and acknowledged it is indeed testing a next-generation model with early-access customers, calling it a substantial leap forward with major improvements in reasoning, coding, and cybersecurity capabilities.

Further sensitive information pertained to an exclusive, invite-only retreat in the U.K. for CEOs of major European companies, which Anthropic’s CEO Dario Amodei is scheduled to attend. The company characterized this as part of a routine series of executive events. The trove also contained internal images, including one referencing an employee’s parental leave, highlighting the personal nature of some exposed assets.

Anthropic has moved to downplay the incident’s severity, stating the materials were early drafts and did not involve core infrastructure, AI systems, or customer data. A company spokesperson emphasized the lapse was unrelated to Claude or any Anthropic AI tools, explicitly distancing the event from the growing trend of technical failures linked to AI-generated code or autonomous AI agents. This is notable given Anthropic’s public reliance on its own Claude-based AI coding agents for internal software development.

This type of pre-release data exposure is not unprecedented in the tech industry. Companies like Apple have experienced similar leaks, such as inadvertently revealing iPhone names via a public sitemap or leaving debugging files active in a redesigned App Store. Gaming giants including Epic Games and Nintendo have also seen assets leak through misconfigured content delivery networks or staging servers. Even Google and Tesla have faced incidents where internal documentation or vehicle data was exposed through improperly secured third-party systems.

However, the landscape for discovering such leaks is evolving rapidly. The proliferation of AI coding tools like Claude Code has lowered the barrier for identifying exposed data. These tools can automate the process of crawling public systems, detecting patterns, and correlating assets, enabling the swift discovery of content that might otherwise go unnoticed. They can generate scripts to scan entire datasets, efficiently uncovering file naming conventions or structural clues that a manual review could miss. While not the cause in this specific case, such AI-powered capabilities likely amplify the risk and impact of configuration errors across the industry.
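As a rough illustration of that workflow, the sketch below expands an assumed file naming convention into candidate URLs and probes them in parallel. The host, prefixes, and suffixes are hypothetical, chosen only for the example; in practice, tools of this kind infer the pattern from a handful of already-known public asset URLs.

```python
import itertools
import concurrent.futures as cf
import requests

# Hypothetical target and naming pattern (purely illustrative).
BASE = "https://cms.example.com/assets"
PREFIXES = ["blog-draft", "launch", "event"]
SUFFIXES = ["header.png", "logo.svg", "notes.md"]

def probe(path: str) -> tuple[str, bool]:
    """Return (path, True) if the asset is fetchable with no credentials."""
    try:
        r = requests.head(f"{BASE}/{path}", timeout=5)
        return path, r.status_code == 200
    except requests.RequestException:
        return path, False

# Expand the inferred naming convention into candidate paths, then probe
# them concurrently. Generating and refining loops like this is exactly
# the chore AI coding tools automate, which lowers the discovery barrier.
paths = [f"{p}-{s}" for p, s in itertools.product(PREFIXES, SUFFIXES)]
with cf.ThreadPoolExecutor(max_workers=8) as pool:
    for path, public in pool.map(probe, paths):
        if public:
            print(f"exposed: {BASE}/{path}")
```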

(Source: Fortune)

Topics

data security lapse, CMS misconfiguration, upcoming AI model, cybersecurity research, internal data exposure, CEO event leak, human error, AI coding tools, tech company leaks, public data lake