Anthropic launches review tool aimed at fixing bugs in AI-generated code
Posted on: Mar 10, 2026
Peer feedback has long been a cornerstone of software development, helping teams catch bugs early, maintain consistency across a codebase, and improve overall code quality.
But the rise of “vibe coding” — where developers use AI tools that generate large amounts of code from simple natural-language instructions — is reshaping the development process. While these tools can dramatically accelerate coding, they also introduce new challenges, including hidden bugs, security vulnerabilities, and code that developers may not fully understand.
To address this, Anthropic has introduced an AI-powered reviewer designed to identify issues before code is merged into a project’s codebase. The new feature, called Code Review, launched Monday within Claude Code.
“We’ve seen a lot of growth in Claude Code, especially within enterprises,” said Cat Wu, Anthropic’s head of product. “One of the most common questions from enterprise leaders is: If Claude Code is generating a large number of pull requests, how can we review them efficiently?”
Pull requests allow developers to submit code changes for evaluation before they are merged into the main codebase. According to Wu, the increased code output generated by AI tools has created a surge in pull requests, often leading to bottlenecks in the review process.
“Code Review is our answer to that,” she said.
The feature is initially launching in research preview for Claude for Teams and Claude for Enterprise customers.
Its debut comes at a significant moment for Anthropic. On the same day, the company filed two lawsuits against the U.S. Department of Defense after being labeled a supply chain risk. As that dispute unfolds, Anthropic is likely to rely more heavily on its rapidly growing enterprise business. The company says enterprise subscriptions have quadrupled since the start of the year, and Claude Code’s run-rate revenue has surpassed $2.5 billion since launch.
Wu said the new tool is designed primarily for large enterprise customers such as Uber, Salesforce, and Accenture, which already use Claude Code and need help managing the growing number of AI-generated pull requests.
Once enabled, Code Review can run automatically for every engineer on a team. The tool integrates with GitHub, analyzes pull requests, and posts comments directly on the code highlighting potential problems and suggesting fixes.
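As a rough illustration of what "posting comments directly on the code" involves, the sketch below builds the JSON body a review bot could send to GitHub's pull request review comments endpoint. The payload fields follow GitHub's REST API; the helper function and the sample finding are invented for illustration and are not part of Claude Code.

```python
# Hypothetical helper (not Anthropic's code): build the JSON body for
# POST /repos/{owner}/{repo}/pulls/{pull_number}/comments, which attaches
# a comment to a specific line of a pull request diff.
def build_review_comment(commit_id, path, line, finding):
    return {
        "commit_id": commit_id,   # head commit the comment is anchored to
        "path": path,             # file within the diff
        "line": line,             # line number in the new version of the file
        "side": "RIGHT",          # comment on the changed (right-hand) side
        "body": f"**{finding['severity'].upper()}**: {finding['msg']}",
    }

comment = build_review_comment(
    "abc123", "app.py", 42,
    {"severity": "red", "msg": "possible infinite loop when list is empty"},
)
```

A real integration would send this payload with an authenticated HTTP request; the point here is only that each finding maps to a line-anchored comment in the pull request.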
Unlike some automated review tools, Anthropic’s system prioritizes logical errors rather than stylistic issues.
“A lot of developers get frustrated with automated feedback that isn’t immediately actionable,” Wu said. “So we chose to focus specifically on logic errors — the issues that matter most.”
The AI reviewer explains its findings step by step, outlining the potential problem, why it matters, and how it might be fixed. Issues are also categorized by severity using color labels: red for critical problems, yellow for possible concerns that require review, and purple for issues tied to older or existing code.
Under the hood, the system relies on multiple AI agents working in parallel. Each agent examines the codebase from a different perspective, and a final agent aggregates the results, removes duplicate findings, and ranks issues by importance.
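The fan-out/fan-in pattern described above can be sketched in a few lines. This is a minimal illustration of the general technique, not Anthropic's actual implementation: two stand-in "agents" review the same diff from different angles, and an aggregation step removes duplicate findings and ranks the rest using the article's red/yellow/purple severity labels.

```python
# Hypothetical sketch of a multi-agent review pipeline: agents run in
# parallel, then a final step merges, dedupes, and ranks their findings.
from concurrent.futures import ThreadPoolExecutor

SEVERITY_RANK = {"red": 0, "yellow": 1, "purple": 2}  # critical issues first

def logic_agent(diff):
    # Stand-in agent focused on logic errors.
    return [{"file": "app.py", "line": 42, "severity": "red",
             "msg": "loop never terminates when the list is empty"}]

def security_agent(diff):
    # Stand-in agent focused on security; repeats one finding to show dedup.
    return [{"file": "app.py", "line": 42, "severity": "red",
             "msg": "loop never terminates when the list is empty"},
            {"file": "db.py", "line": 7, "severity": "yellow",
             "msg": "query built by string concatenation"}]

def aggregate(finding_lists):
    seen, merged = set(), []
    for findings in finding_lists:
        for f in findings:
            key = (f["file"], f["line"], f["msg"])
            if key not in seen:          # drop duplicate findings
                seen.add(key)
                merged.append(f)
    return sorted(merged, key=lambda f: SEVERITY_RANK[f["severity"]])

def review(diff, agents):
    with ThreadPoolExecutor() as pool:   # agents examine the diff in parallel
        results = list(pool.map(lambda agent: agent(diff), agents))
    return aggregate(results)

issues = review("<diff text>", [logic_agent, security_agent])
```

The duplicate finding from the two agents is collapsed into one, and the remaining issues come back ordered by severity, mirroring the aggregation step the article describes.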
The tool also performs basic security checks, and engineering teams can configure additional rules based on their internal standards. For deeper security analysis, Anthropic offers a separate product called Claude Code Security.
Because the system uses a multi-agent architecture, reviews can be resource-intensive. Pricing is token-based, similar to other AI services, with costs varying based on the complexity of the code. Wu estimates that each review typically costs between $15 and $25.
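To make the token-based pricing concrete, here is a back-of-the-envelope cost model. The per-million-token rates and token counts below are assumed for illustration only; Anthropic has not published the tool's rates, and a multi-agent review that re-reads a codebase several times can consume millions of input tokens.

```python
# Illustrative cost model: token-based pricing, quoted per million tokens.
# All rates and token counts here are assumptions, not published figures.
def review_cost(input_tokens, output_tokens, in_price_per_m, out_price_per_m):
    return (input_tokens / 1e6) * in_price_per_m \
         + (output_tokens / 1e6) * out_price_per_m

# Hypothetical review: 4M input tokens (code read by several agents)
# plus 200K output tokens (the written findings).
cost = review_cost(input_tokens=4_000_000, output_tokens=200_000,
                   in_price_per_m=3.0, out_price_per_m=15.0)
# 4.0 * $3 + 0.2 * $15 = $12 + $3 = $15
```

Under these assumed rates the example lands at the low end of Wu's $15 to $25 estimate; heavier reviews with more agent passes would push the token count, and the cost, higher.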
Despite the cost, Wu believes the tool will become essential as AI-driven development accelerates.
“There’s been an enormous amount of market demand for this,” she said. “As Claude Code makes it easier to build new features, teams are seeing a much greater need for code review. Our goal is to help enterprises ship software faster — and with fewer bugs.”