AI & TechArtificial IntelligenceCybersecurityNewswireTechnology

Experts Challenge 90% Autonomous AI Attack Claim by Anthropic

▼ Summary

Anthropic reported China-state hackers used its Claude AI tool to automate up to 90% of a cyber espionage campaign, requiring minimal human intervention.
– The company described this as the first AI-orchestrated cyber espionage campaign, highlighting unprecedented use of AI agentic capabilities.
– Anthropic warned that AI agents could significantly increase the viability of large-scale cyberattacks if used maliciously.
– Outside researchers questioned the significance, noting that legitimate users often report only incremental gains from AI tools.
– Security expert Dan Tentler expressed skepticism that attackers achieve superior AI results compared to others who face inconsistent model behavior.

Researchers at Anthropic have reported what they describe as the first documented case of an AI-driven cyber espionage operation, alleging that state-sponsored hackers from China utilized their Claude AI tool to target numerous entities. While the company frames this as a groundbreaking event, independent cybersecurity specialists express skepticism, suggesting the findings may be overstated and not as transformative as portrayed.

According to Anthropic’s recently published analysis, a highly sophisticated Chinese state-backed group employed Claude Code to automate approximately 90 percent of their hacking activities, with human operators stepping in only at a handful of critical junctures. The company emphasized that this represents an unprecedented use of autonomous AI agents in cyber operations, raising alarms about the potential for such technology to dramatically scale malicious campaigns with minimal human oversight. They cautioned that while AI agents can enhance productivity in legitimate contexts, their misuse could significantly lower barriers for executing large-scale cyberattacks.

However, several external experts pushed back on the significance of these claims. They questioned why malicious actors appear to achieve such dramatic results with AI, while security professionals and ethical developers typically report only modest improvements. Dan Tentler, executive founder of Phobos Group and an authority on complex security breaches, voiced a common frustration: he finds it difficult to accept that attackers can consistently coax high-level performance from AI models that remain stubbornly uncooperative for researchers and legitimate users. Tentler remarked that if models comply with hackers 90% of the time, it’s puzzling why others frequently encounter unhelpful, evasive, or nonsensical outputs from the same systems.

(Source: Ars Technica)

Topics

ai espionage 95% cybersecurity implications 90% ai agents 88% chinese hackers 85% claude ai 82% automation efficiency 80% researcher skepticism 78% ai capabilities disparity 75% white-hat hackers 72% ai limitations 70%