Anthropic’s New Tool Reviews AI-Generated Code

Summary
– The rise of AI-powered “vibe coding” has accelerated development but introduced new bugs, security risks, and poorly understood code, increasing the need for efficient code review.
– Anthropic launched an AI tool called Code Review to automatically analyze pull requests, aiming to fix the bottleneck caused by the high volume of code generated by its Claude Code assistant.
– The tool is targeted at large enterprise users, integrates with GitHub, and focuses on catching logical errors by providing step-by-step explanations and severity-labeled suggestions.
– Code Review uses a multi-agent architecture to analyze code in parallel and is offered as a premium, resource-intensive, token-based service, with an estimated cost of $15–$25 per review.
– Anthropic’s launch comes as its enterprise business grows, with Claude Code generating significant revenue, and the company faces legal disputes that may increase its reliance on this commercial success.

In software development, peer review is an essential practice for ensuring quality and security. AI coding assistants, which can rapidly generate large volumes of code from simple instructions, have transformed development workflows. But this “vibe coding” approach has also introduced new problems: subtle bugs, security vulnerabilities, and code that human teams struggle to understand. To address this growing need, Anthropic has launched an AI-powered tool that automatically analyzes and critiques AI-generated code before it reaches a project’s main branch.
The product, named Code Review, is now available within Claude Code. According to Cat Wu, Anthropic’s head of product, the launch responds directly to feedback from enterprise clients. The dramatic increase in code output from tools like Claude Code has created a bottleneck in the pull request review process, slowing down the overall pace of software delivery. “Code Review is our answer to that,” Wu stated. The tool is initially rolling out to Claude for Teams and Claude for Enterprise customers in a research preview.
This launch coincides with a period of significant growth for Anthropic’s enterprise business, where subscriptions have reportedly quadrupled since the beginning of the year. Claude Code itself has achieved a run-rate revenue surpassing $2.5 billion. Wu emphasized that Code Review is specifically targeted at large-scale enterprise users such as Uber, Salesforce, and Accenture, organizations that are already using Claude Code and now need help managing the sheer volume of pull requests it generates.
Once activated by a development lead, Code Review integrates directly with platforms like GitHub. It automatically analyzes new pull requests and leaves detailed comments directly on the affected code lines. The AI focuses on identifying logical errors rather than stylistic preferences, a deliberate choice meant to keep its feedback immediately actionable instead of nagging developers over nitpicks. For each finding, the system explains its reasoning step by step: what the issue is, why it could be problematic, and how it might be fixed.
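The article does not describe Anthropic’s integration internals, but line-anchored pull request comments of this kind are typically posted through GitHub’s review-comment REST endpoint. A minimal sketch of what such a payload could look like, with a placeholder file path, commit ID, and comment text (none of these reflect the actual product):

```python
# Hypothetical sketch: how a review bot could attach a line-level comment
# to a pull request via GitHub's REST API ("Create a review comment for a
# pull request" endpoint). The path, line, commit ID, and message are
# illustrative placeholders, not Anthropic's actual integration.
import json

def build_line_comment(path: str, line: int, commit_id: str, body: str) -> dict:
    """Payload for POST /repos/{owner}/{repo}/pulls/{pr}/comments."""
    return {
        "body": body,
        "commit_id": commit_id,
        "path": path,
        "line": line,     # line in the diff to anchor the comment on
        "side": "RIGHT",  # comment on the new version of the file
    }

payload = build_line_comment(
    path="app/checkout.py",
    line=42,
    commit_id="abc123",
    body="[HIGH] Possible off-by-one: loop skips the final cart item.",
)
print(json.dumps(payload, indent=2))
```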
To prioritize issues, the tool uses a color-coded labeling system: red for high-severity problems, yellow for potential issues worth a second look, and purple for problems related to pre-existing code or historical bugs. Under the hood, the system employs a multi-agent architecture. Multiple AI agents examine the codebase from different perspectives simultaneously, and a final agent aggregates their findings, removes duplicates, and ranks the issues by importance.
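The fan-out-and-merge flow described above can be sketched as follows. This is a minimal illustration of the pattern, not Anthropic’s implementation; the agent behaviors, findings, and severity ordering are assumptions invented for the example:

```python
# Minimal sketch of the multi-agent pattern the article describes:
# several analyzers examine the same change in parallel, then a final
# pass merges their findings, removes duplicates, and ranks by severity.
from concurrent.futures import ThreadPoolExecutor

SEVERITY_RANK = {"red": 0, "yellow": 1, "purple": 2}  # red = most urgent

def logic_agent(diff):
    return [{"line": 10, "severity": "red", "msg": "Null check missing before dereference."}]

def security_agent(diff):
    return [
        {"line": 10, "severity": "red", "msg": "Null check missing before dereference."},  # duplicate
        {"line": 27, "severity": "yellow", "msg": "User input passed to shell unescaped."},
    ]

def history_agent(diff):
    return [{"line": 3, "severity": "purple", "msg": "Touches code tied to a past regression."}]

def review(diff):
    agents = [logic_agent, security_agent, history_agent]
    # Fan out: each agent inspects the change in parallel.
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda agent: agent(diff), agents)
    # Aggregate: flatten, drop duplicate findings, rank by severity.
    seen, merged = set(), []
    for finding in (f for batch in results for f in batch):
        key = (finding["line"], finding["msg"])
        if key not in seen:
            seen.add(key)
            merged.append(finding)
    return sorted(merged, key=lambda f: SEVERITY_RANK[f["severity"]])

for f in review("fake-diff"):
    print(f["severity"], f["line"], f["msg"])
```

The interesting part is the final aggregation step: without deduplication and ranking, overlapping agents would flood the pull request with repeated, unordered comments.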
While the tool provides a basic security analysis, Wu noted that users who need deeper scrutiny should turn to Anthropic’s separately launched Claude Code Security product. Engineering teams can also customize Code Review with checks based on their own internal best practices. Because it is a premium, resource-intensive service, Code Review is priced per token, with costs varying by code complexity; Wu estimated the average review would cost between $15 and $25.
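As a rough illustration of how token-based pricing scales with code size, here is a back-of-the-envelope calculation. The per-million-token rates and token counts below are assumptions chosen for the example, not the product’s actual billing; the article reports only Wu’s $15–$25 average-per-review estimate:

```python
# Back-of-the-envelope sketch of token-based pricing. Rates and token
# counts are illustrative assumptions, not actual Code Review billing.
def review_cost(input_tokens, output_tokens,
                input_rate_per_mtok=15.0, output_rate_per_mtok=75.0):
    """Cost in dollars for one review at assumed per-million-token rates."""
    return (input_tokens / 1e6) * input_rate_per_mtok + \
           (output_tokens / 1e6) * output_rate_per_mtok

# A large PR fanned out to several agents might plausibly consume on the
# order of 1M input tokens and 60k output tokens:
cost = review_cost(1_000_000, 60_000)
print(f"${cost:.2f}")  # → $19.50, inside the quoted $15-$25 range
```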
Wu described the product as a response to intense market demand. As AI accelerates feature development, the need for efficient, high-quality code review has become a critical bottleneck. The company’s goal is that this tool will enable enterprises to build software faster than ever while simultaneously reducing the number of bugs that make it into production.
(Source: TechCrunch)





