
Anthropic’s Claude Security scans code for flaws and prioritizes fixes

Summary

– Anthropic announced Claude Security, a defensive cybersecurity product in public beta for Enterprise-tier users, that uses the Claude Opus 4.7 model to scan codebases for vulnerabilities and generate targeted patches.
– Project Glasswing, an initiative using the powerful Mythos model kept from public release, aims to find vulnerabilities in critical open-source software with partners like Apple, Google, and Microsoft.
– Claude Security performs larger-scale scans of full repositories, tracing data flows and component interactions like a security researcher, and includes safeguards in Opus 4.7 to block malicious uses like ransomware development.
– The tool uses a multi-stage validation pipeline to verify findings, providing confidence ratings, severity, reproduction steps, and recommended fixes to help developers prioritize high-impact vulnerabilities.
– Claude Security integrates with technology partners like CrowdStrike and Palo Alto Networks, and security partners like Accenture and Deloitte, to strengthen enterprise security postures.

Anthropic has officially launched Claude Security, a defensive cybersecurity product now available in public beta for Enterprise-tier Claude users. The company says it will “soon” expand access to Claude Team and Max-tier subscribers.

This new offering fits squarely within Anthropic’s growing cyberdefense portfolio. Claude Security enables security teams to scan codebases for vulnerabilities and generate targeted patches using the Claude Opus 4.7 model. The goal is straightforward: help defenders find and fix flaws before attackers can exploit them.

Earlier this month, Anthropic also unveiled Project Glasswing, an initiative described as an AI Manhattan Project aimed at uncovering vulnerabilities in the world’s open-source software infrastructure. Glasswing relies on a model called Mythos, which is considered so dangerous that Anthropic is not releasing it publicly. Instead, it shares Mythos only with participants that include Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, Nvidia, and Palo Alto Networks.

Vulnerability scanning sits at the heart of both Project Glasswing and Claude Security. Most cyberattacks begin when an adversary finds and exploits a weakness. If defenders can patch those weaknesses first, the attack surface shrinks dramatically. Think of the Death Star's exhaust port: one critical flaw that, once discovered, led to its destruction. Your codebase likely has more than one such flaw. Claude Security aims to find them before attackers do.

Everything runs on software, and software is inherently vulnerable. Unpatched vulnerabilities open doors for adversaries and can also cause bugs that frustrate users. I first experimented with AI vulnerability scanning back in September using OpenAI’s Codex, but it failed because it couldn’t handle project-wide context. Pairing it with ChatGPT’s Deep Research helped uncover critical flaws in my security software, which I fixed immediately. Since then, both Codex and Claude Code have improved their context capacity, but neither can process an entire large codebase at once.

Mythos can. It handles relationships between codebases on a macro scale, but it remains off-limits to the public, even for Enterprise-tier customers. OpenAI recently introduced Codex Security, which offers broader context analysis. Now Claude Security brings similar large-scale scanning capabilities.

Claude Security can scan a full repository or a targeted directory. According to Anthropic, “Claude reasons about code the way a security researcher does, tracing data flows, reading source code, and working out how components interact across files and modules.”

But there is a catch: vulnerability scanners help defenders, but they also help attackers. That was the lesson of the Death Star: once the Rebels knew the weakness, they exploited it. Both Microsoft and OpenAI have reported that state-affiliated actors from China, Iran, Russia, and North Korea have used large language models to research companies, debug code, generate scripts, and craft phishing content.

Anthropic is working to prevent its models from being misused. With the launch of Opus 4.7, the company introduced new cyber safeguards that automatically detect and block requests suggestive of prohibited or high-risk cybersecurity uses. For example, Opus 4.7 now blocks “activities that are almost always used maliciously and have little to no legitimate defensive application such as mass data exfiltration or ransomware code development.”

What about activities with legitimate defensive uses, like vulnerability exploitation or offensive security tooling? Opus 4.7 blocks those too, unless the user is approved for Anthropic's Cyber Verification Program. Researchers with this clearance can use Opus 4.7 to perform blocked security activities as part of their work. (Disclosure: I am an authorized member of this program.)

The real challenge with vulnerability scanning is noise. Every minor issue can get flagged, wasting hours on low-impact bugs while critical flaws go unpatched. Claude Security addresses this with a multi-stage validation pipeline that independently verifies each finding before it reaches an analyst. Every result gets a confidence rating.

The AI explains each finding in detail, including confidence level, severity, likely impact, reproduction steps, and recommended fix. This helps developers prioritize high-confidence, high-impact issues first, without chasing false alarms.
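Anthropic hasn't published the schema behind these reports, but the prioritization logic is easy to picture. As a minimal sketch, with all field names and the ranking rule assumed rather than taken from the product, findings carrying a confidence score and a severity label could be sorted so the high-confidence, high-impact items surface first:

```python
from dataclasses import dataclass, field

# Hypothetical severity ordering; not Anthropic's actual scale.
SEVERITY_RANK = {"critical": 3, "high": 2, "medium": 1, "low": 0}

@dataclass
class Finding:
    title: str
    severity: str                      # "critical" | "high" | "medium" | "low"
    confidence: float                  # 0.0-1.0, from the validation pipeline
    impact: str = ""
    repro_steps: list[str] = field(default_factory=list)
    recommended_fix: str = ""

def prioritize(findings: list[Finding]) -> list[Finding]:
    """Order findings so analysts see high-severity,
    high-confidence issues before low-impact noise."""
    return sorted(
        findings,
        key=lambda f: (SEVERITY_RANK[f.severity], f.confidence),
        reverse=True,
    )

findings = [
    Finding("Verbose debug pages", "low", 0.90, impact="Information leak"),
    Finding("SQL injection in /login", "critical", 0.95, impact="Full DB read",
            repro_steps=["POST /login with ' OR 1=1--"],
            recommended_fix="Use parameterized queries"),
]
ordered = prioritize(findings)
```

Sorting on the (severity, confidence) tuple is one plausible way to encode "high-confidence, high-impact first"; a real triage queue would likely weight impact and exploitability as well.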

From these findings, defenders can open the code in Claude Code, in context, and modify the areas needing work directly. Anthropic has also added scheduled scans for ongoing coverage, the ability to dismiss findings with documented reasons (so future reviewers can trust prior triage decisions), and CSV and Markdown export for integrating findings into existing tracking and audit systems.
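The export format isn't documented in Anthropic's announcement, but the workflow it enables is straightforward. As an illustrative sketch (column names and the dismissal field are assumptions, not the product's actual format), a CSV export carrying triage state, including dismissed findings with their documented reasons, might look like this:

```python
import csv
import io

def export_csv(findings: list[dict]) -> str:
    """Serialize triaged findings, including dismissals and their
    documented reasons, to CSV for external tracking/audit systems."""
    fields = ["title", "severity", "confidence", "status", "dismiss_reason"]
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fields)
    writer.writeheader()
    writer.writerows(findings)
    return buf.getvalue()

report = export_csv([
    {"title": "Hardcoded API key", "severity": "high", "confidence": 0.92,
     "status": "open", "dismiss_reason": ""},
    {"title": "Weak hash in test fixture", "severity": "low", "confidence": 0.80,
     "status": "dismissed", "dismiss_reason": "test-only code, never shipped"},
])
```

Keeping the dismissal reason in the export is what lets a future reviewer trust a prior triage decision instead of re-investigating the same finding.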

Claude Security subscribers can work with technology and security partners. Anthropic specifically named technology partners including CrowdStrike, Palo Alto Networks, SentinelOne, TrendAI, and Wiz, which are integrating Opus 4.7 into their platforms. Security partners including Accenture, BCG, Deloitte, Infosys, and PwC are deploying Claude Security to help enterprises strengthen their security posture.

Do you see AI vulnerability scanning as more useful for finding dangerous flaws or for helping developers prioritize fixes faster? Let us know in the comments below.

(Source: ZDNet)
