Leaked Data Reveals Anthropic’s Powerful Mythos AI Model

▼ Summary
– Anthropic is developing a new, more capable AI model called Claude Mythos, tied to a new performance tier named Capybara, and has begun testing it with early access customers.
– Details of this model were leaked after draft blog posts and nearly 3,000 unpublished assets were left in an unsecured, publicly accessible data cache due to a human configuration error.
– The leaked documents state the model poses unprecedented cybersecurity risks, as it is far more advanced in cyber capabilities than existing models and could be exploited for large-scale attacks.
– The data cache also revealed plans for an exclusive, invite-only CEO summit in Europe, part of Anthropic’s strategy to market its AI to large corporate clients.
– Anthropic confirmed the leak and the model’s development, describing it as a “step change” and their most capable to date, while emphasizing a cautious release strategy due to its capabilities and risks.

A significant data leak has revealed that Anthropic is developing a new, highly advanced artificial intelligence model currently undergoing testing with select partners. The company confirmed the existence of this model, describing it as a major step change in AI performance and the most capable system it has ever built. Early details emerged after draft materials were inadvertently stored in a publicly accessible data cache, which was subsequently discovered and analyzed.
The leaked documents, including what appears to be a draft blog post, identify the new model as Claude Mythos. Anthropic’s internal assessment warns the system poses unprecedented cybersecurity risks, prompting an exceptionally cautious rollout strategy. The model is also referred to under a new performance tier named Capybara, described as larger and more intelligent than the company’s current top-tier Opus models. According to the draft, Capybara achieves dramatically higher scores than Claude Opus 4.6 on evaluations for software coding, academic reasoning, and cybersecurity.
In a statement, an Anthropic spokesperson acknowledged training and testing a new general-purpose model with meaningful advances in reasoning, coding, and cybersecurity. Given the strength of its capabilities, the company said it is being deliberate about the release, working with a small group of early access customers, as is standard industry practice. The draft material notes the model is expensive to run and not yet ready for a general launch.
A central concern highlighted in the leaked documents is the model’s potential for misuse. Anthropic states the system is currently far ahead of any other AI model in cyber capabilities and could presage a wave of models that exploit vulnerabilities faster than defenders can respond. Because of this significant new cybersecurity risk, the company’s release plan focuses initially on cyber defenders, granting organizations a head start to improve their codebases against impending AI-driven exploits. This mirrors industry trends, as other leading labs have also recently released models classified as high-capability for cybersecurity tasks, acknowledging their dual-use nature.
The data exposure originated from a human error in configuring an external content management system, according to Anthropic. The default public settings for the tool led to a cache of nearly 3,000 unpublished assets, including draft blog posts, images, and internal documents, being left on a publicly searchable data store. After being notified, the company removed public access to the cache.
Among the exposed materials was information about an upcoming, invite-only CEO summit in Europe. This exclusive two-day retreat for influential European business leaders, to be attended by Anthropic CEO Dario Amodei, is part of the company’s drive to engage large corporate customers. The event is described as an intimate gathering where attendees will discuss AI adoption with policymakers and experience unreleased Claude capabilities. An Anthropic spokesperson characterized it as part of an ongoing series of events to discuss the future of AI.
(Source: Fortune)