The San Francisco Frontier | Est. 2025

Anthropic's New AI Code Reviewer Is Here to Save You From Your AI-Generated Code Mess


Remember when coding was slower but at least you could follow what was happening? Yeah, those days are kind of over. AI tools like Claude Code have completely transformed how developers work, letting them generate massive amounts of code just by describing what they want in plain language. It’s fast, it’s efficient, and it’s also introduced a whole new problem: way too many bugs, security risks, and code that nobody really understands.

Enter Anthropic’s latest solution: Code Review, a new AI-powered tool designed to catch those problems before they actually make it into your software. The tool officially launched this week for Claude for Teams and Claude for Enterprise customers, and it’s addressing a real pain point that’s been building up as AI-generated code floods development workflows.

“We’ve seen massive growth in Claude Code, especially with enterprise customers, and everyone keeps asking us the same thing: now that we’re generating tons of pull requests, how do we actually review all of this stuff efficiently?” Cat Wu, Anthropic’s head of product, explained. Basically, the sheer volume of code being generated has created a massive bottleneck in the review process, and Code Review is their answer to that problem.

Here’s how it works: once enabled, Code Review integrates directly with GitHub and automatically analyzes every pull request your team submits. It leaves detailed comments on the code explaining potential issues and suggesting fixes. The focus here is specifically on logical errors rather than style nitpicks, which is honestly refreshing. Wu noted that developers get annoyed when AI feedback isn’t immediately useful, so they decided to focus purely on catching the things that actually matter.
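Anthropic hasn't published Code Review's internals, but the "leave detailed comments on the pull request" step maps onto GitHub's standard review-comment endpoint (`POST /repos/{owner}/{repo}/pulls/{pull_number}/comments`). A minimal sketch of what a tool like this posts per finding; the payload fields follow GitHub's public REST API, while the helper name and the example values are assumptions for illustration:

```python
def build_review_comment(body: str, commit_id: str, path: str, line: int) -> dict:
    """Build the JSON payload for GitHub's 'create a review comment'
    endpoint: POST /repos/{owner}/{repo}/pulls/{pull_number}/comments
    """
    return {
        "body": body,            # the explanation and suggested fix
        "commit_id": commit_id,  # head commit of the pull request
        "path": path,            # file the comment attaches to
        "line": line,            # line in the diff being flagged
        "side": "RIGHT",         # comment on the new version of the file
    }

# Hypothetical finding, formatted as a PR comment payload
payload = build_review_comment(
    body="Possible unhandled None return from lookup(); consider a guard.",
    commit_id="abc123",
    path="app/handlers.py",
    line=42,
)
```

An authenticated `POST` of that payload to the endpoint above is all it takes to surface a finding inline on the diff, which is why GitHub-integrated reviewers tend to feel native to the existing workflow.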

The tool uses a multi-agent approach, meaning multiple AI agents examine your code from different angles simultaneously, then a final agent ranks everything by priority and removes duplicates. Issues get labeled by severity: red for the most critical problems, yellow for things worth reviewing, and purple for issues related to existing code or historical bugs.
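The fan-out-then-rank shape described above is straightforward to picture in code. A minimal sketch, assuming nothing about Anthropic's actual implementation: every name here (`Finding`, `run_agents`, `rank_and_dedupe`, the toy agents) is hypothetical, and only the severity scheme (red, yellow, purple) comes from the article:

```python
from dataclasses import dataclass

# Severity levels as the article describes them: red = most critical,
# yellow = worth reviewing, purple = existing code / historical bugs.
SEVERITY_ORDER = {"red": 0, "yellow": 1, "purple": 2}

@dataclass(frozen=True)  # frozen makes Finding hashable, so a set can dedupe
class Finding:
    path: str
    line: int
    severity: str
    message: str

def run_agents(diff: str, agents) -> list[Finding]:
    """Fan the same diff out to several specialized review agents."""
    findings: list[Finding] = []
    for agent in agents:
        findings.extend(agent(diff))
    return findings

def rank_and_dedupe(findings: list[Finding]) -> list[Finding]:
    """Stand-in for the final agent: drop duplicates, order by severity."""
    unique = set(findings)
    return sorted(unique, key=lambda f: (SEVERITY_ORDER[f.severity], f.path, f.line))

# Two toy agents that overlap on one finding
logic_agent = lambda diff: [Finding("app.py", 10, "red", "possible null deref")]
misc_agent = lambda diff: [
    Finding("app.py", 10, "red", "possible null deref"),   # duplicate
    Finding("app.py", 3, "yellow", "ambiguous variable name"),
]

report = rank_and_dedupe(run_agents("<pr diff>", [logic_agent, misc_agent]))
```

The dedup-and-rank pass is what keeps a parallel design from burying reviewers: each agent can be noisy and redundant on its own, as long as one final pass collapses the overlap and puts the red items first.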

There’s a catch, though: this isn’t cheap. Since it relies on multiple agents working in parallel, the tool is resource-intensive. Pricing is token-based and varies with code complexity, but Wu estimates each review will run you between $15 and $25 on average. It’s definitely a premium experience, but Anthropic argues it’s necessary as AI tools generate more and more code.

The timing here is interesting. Anthropic just filed two lawsuits against the Department of Defense over being designated a supply chain risk, which probably means they’re leaning harder into their enterprise business. That business has been booming: Claude Code subscriptions have quadrupled since the beginning of the year, and the tool’s run-rate revenue has already surpassed $2.5 billion. Companies like Uber, Salesforce, and Accenture are already using Claude Code and clearly need better ways to manage the pull requests flooding in.

As AI keeps reshaping how we build software, tools like Code Review are becoming less of a luxury and more of a necessity. Whether $15 to $25 per review is worth it for your team is another question entirely.

AUTHOR: tgc

SOURCE: TechCrunch