Why AI coding tools shift the real bottleneck to review
Blog post from LogRocket
In December 2025, Cursor acquired Graphite for over $290 million, a deal that highlights a growing industry focus on code review rather than code generation. As AI coding tools rapidly increase code output, the real bottleneck has shifted to reviewing AI-generated code, which surfaces more issues than human-written code: a study by CodeRabbit found that AI-written code has 1.7 times more issues, and debugging AI output can take longer than debugging human code.

A comparative test of manual coding against AI-generated code using Anthropic's Claude Code showed that AI code tends to be more voluminous and more defensive. That changes the questions reviewers ask: instead of "is this correct?", they increasingly ask "is this necessary?" Even when AI-generated code improves quality, it demands more time and scrutiny from senior engineers, who must judge whether its comprehensive defensive coding is actually warranted.

The Cursor-Graphite acquisition underscores the need for adapted review processes. To keep AI from becoming a productivity bottleneck, teams must restructure their review practices so they can absorb higher code volume while still capturing AI's benefits.
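The shift from "is this correct?" to "is this necessary?" is easier to see side by side. The following hypothetical Python sketch (illustrative only, not taken from the study) contrasts a concise human-style function with the kind of layered defensive version AI tools often produce:

```python
# Hypothetical illustration of the review-question shift.
# Reviewing the first function is mostly a correctness check;
# reviewing the second also means judging whether each guard
# and fallback is necessary for this codebase.

# Concise, human-style version: trusts callers to pass valid input.
def average(values):
    return sum(values) / len(values)

# Voluminous, defensive AI-style version: every input condition is
# checked explicitly, and the reviewer must evaluate each branch.
def average_defensive(values):
    if values is None:
        raise ValueError("values must not be None")
    if not isinstance(values, (list, tuple)):
        raise TypeError("values must be a list or tuple")
    if len(values) == 0:
        return 0.0  # Reviewer question: is silently returning 0.0 right?
    total = 0.0
    for v in values:
        if not isinstance(v, (int, float)):
            raise TypeError(f"non-numeric value: {v!r}")
        total += v
    return total / len(values)

print(average([1, 2, 3]))            # 2.0
print(average_defensive([1, 2, 3]))  # 2.0
```

Both functions compute the same result on valid input; the second is several times longer, and most of the review effort goes into deciding whether that extra surface area belongs in the codebase at all.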