
Report: Unauthorized Access to Anthropic’s Mythos Cyber Tool

Summary

– An unauthorized group has gained access to Anthropic’s cybersecurity AI tool, Mythos, through a third-party vendor.
– Anthropic is investigating but states there is no evidence the breach impacted its own internal systems.
– The group accessed the tool on the day it was announced by guessing its online location based on Anthropic’s naming conventions for its other models.
– Members are part of a Discord channel focused on unreleased AI models and have used Mythos regularly, providing evidence to Bloomberg.
– Mythos was released in a limited preview to select vendors like Apple to prevent misuse, as it could be weaponized by bad actors.

A cybersecurity tool designed to protect corporate networks has reportedly been accessed by an unauthorized group, raising concerns about its potential misuse. Anthropic’s Mythos, an AI-powered enterprise security product announced earlier this year, was allegedly compromised through a third-party vendor environment. According to a Bloomberg report, members of a private online forum gained entry to the Claude Mythos Preview, though the company states its own systems remain unaffected.

An Anthropic spokesperson confirmed an investigation is underway into claims of this unauthorized access. The preliminary findings indicate no evidence that the activity has impacted any internal Anthropic infrastructure. The breach appears to have originated from a contractor’s environment, not from Anthropic’s direct systems.

The group reportedly employed several methods to locate and access the model. One strategy involved leveraging the access privileges of an individual who worked for an Anthropic contractor and who was later interviewed about the incident. Members of the group, who communicate via a Discord channel devoted to uncovering details about unreleased AI models, have been using Mythos regularly. They provided evidence to journalists in the form of screenshots and a live software demonstration.

Bloomberg’s report suggests the group gained entry on the very day Mythos was publicly announced. They allegedly made an educated guess about the model’s online location based on their knowledge of Anthropic’s naming conventions for other AI systems. A source familiar with the group’s motives stated their interest lies in experimenting with new models, not in using them for malicious purposes.

Mythos was initially released under a tightly controlled program called Project Glasswing, which included a select roster of vendors such as Apple. This limited release model was a deliberate security measure by Anthropic to prevent the tool from falling into the wrong hands. The company has previously emphasized that while Mythos is built to bolster enterprise security, it could be weaponized against corporate defenses if misused.

This incident presents a significant challenge for Anthropic. The company’s strategy of a restricted, exclusive preview was intended precisely to mitigate security risks and build trust with enterprise clients. Any confirmed breach of the Mythos preview could undermine those efforts, casting doubt on the safeguards surrounding a tool meant to be a cornerstone of corporate cybersecurity.

(Source: TechCrunch)
