The State of Code Reviews: Modern Code Review Approaches and Tools
Aaron Beals · Chief Technical Officer
June 10, 2025

Ever since Michael Fagan wrote about design and code inspections in the mid-1970s, software developers have been reviewing each other's code. Yes, partially out of curiosity, but we do this mainly to improve software quality. What started out as a way to catch bugs and maintain code quality has evolved into standard operating procedure—at this point, having a code-review team culture is considered best practice.

In modern code reviews, top software development teams don't just catch issues; they identify opportunities for improvement, enforce adherence to policies, share knowledge, and help new developers get up to speed.

However, in the last five years, we engineering leaders have seen distributed teams upend our approach to code reviews and generative AI put increased pressure on code complexity. Today's engineering leaders need to rethink code review strategies in order to balance quality, velocity, and team morale.

In this post, we'll cover how modern code reviews have evolved to address these challenges, tools that have helped with that evolution, and how engineering leaders should measure code review effectiveness.

The Evolving Purpose of Code Reviews

With all due respect to Fagan, most teams do informal code reviews, not formal ones. In these lightweight code reviews, we're still finding issues, but the reviews walk the pragmatic line between code quality and team velocity, and have evolved over time to support additional benefits. 

Whereas early code reviews were focused on catching bugs and maintaining code quality, modern code review objectives include:

  • Finding and addressing defects
  • Improving code readability and understandability
  • Identifying opportunities for improvement (better approaches to solving the problem)
  • Knowledge sharing across development teams
  • Ensuring adherence to security and privacy standards
  • Maintaining architectural consistency
  • Ensuring more thorough test coverage
  • Getting new developers up to speed and mentoring junior devs

Much of what early code reviews caught (e.g. syntax and formatting issues, documentation issues, test coverage gaps) can now be handled by tools in your CI/CD pipeline (we'll talk tools shortly). But many of the above benefits are gleaned through human review. 

However, software developers tend to be busy, and thoroughness and attention to detail in code reviews often take a hit when pressure is on to deliver new features quickly. And with the introduction of AI coding assistants, developers have been generating more code, more quickly. A higher volume of code means more time spent performing code reviews.

Common Challenges in Modern Code Reviews

This ever-present tension between thorough code reviews and development velocity has taken on a new dimension in the last two years. From GitHub Copilot to Cursor to Tabnine to Claude Code, developers are adopting AI assistants to help them write code, often with a mandate from companies as part of their AI adoption strategy. And leadership—often non-technical leadership—expects that generative AI will increase development output, so developers are incentivized to heavily leverage AI coding assistants.

These coding assistants generate more code, more quickly, than a human developer would alone. As a result, the burden of code review grows heavier, leaving developers with less time to write code themselves—or to augment what comes from generative AI.

The generated code also tends to be more complex, to introduce dependencies without first checking with the developer, and to make mistakes such as violating directives provided in the context. All of this puts greater pressure on code reviews, sometimes requiring senior and staff engineers to get involved.

I've had several conversations in the last few weeks that back this up:

  • One with an experienced engineer who has been exploring the "vibe coding" approach to building applications and has found that despite instructions to the contrary, the AI assistant kept injecting unwanted frontend frameworks
  • Another with a lead whose team has been using generative AI to assist with a refactor, and has been moving so fast that they've been more focused on reviewing tests than the code itself

Layer all of this on top of the highly distributed nature of our development teams over the last few years, and engineering leaders have had to adapt their code review process.

Modern Code Review Approaches

There have been distributed software development teams since we figured out how to send code over networks, but the last five years have seen the model expand to upwards of 80% of development teams working remotely in some form.

I'm a big proponent of distributed software development teams for a number of reasons (fodder for a different post), but it's pretty well understood that time zones do have an impact on communication patterns. GitHub introduced PRs in 2008-09, but they were largely used by open source contributors—these folks were almost always distributed. As software development teams started to work remotely, they began to shift from synchronous to asynchronous code reviews. Whether teams sent diffs, pushed feature branches, or used PRs, asynchronous code review became an established approach for remote teams.

What's also happened recently is a push toward shorter change lead time and increased deployment frequency. You hopefully recognize these as two of the four key DORA software delivery metrics. As teams pursue these, they have to adopt a combination of process changes and tools to keep up with the increased velocity.

On the process side, teams have taken a few approaches: 

  • Segmenting and tiering code reviews based on risk and complexity so scarcer team resources (e.g. staff and principal engineers, security engineers) are only pulled in when needed (see the sketch after this list).
  • Leaning more on tools to "pre-review" code changes, to cut down on (and focus) human time in the review. We'll talk tools more in a moment.
  • Using a "just enough" review philosophy, in which the reviewer prioritizes areas of focus based on the team's needs: e.g. correctness, security, performance, architectural fit
  • Leveraging multiple reviewers, each tackling one of the areas of focus
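To make the first approach concrete, here's a minimal sketch of path-based review tiering in Python. The path patterns, tier names, and reviewer roles are hypothetical examples, not a prescription; adapt them to your own codebase and team structure.

  # A minimal sketch of risk-based review tiering. The path patterns, tier
  # names, and reviewer roles below are hypothetical examples.
  from fnmatch import fnmatch

  TIER_RULES = [
      ("auth/**",       "high-risk",   ["security-engineer", "staff-engineer"]),
      ("billing/**",    "high-risk",   ["staff-engineer"]),
      ("migrations/**", "medium-risk", ["senior-engineer"]),
      ("docs/**",       "low-risk",    []),  # tooling-only review is fine here
  ]
  DEFAULT_TIER = ("standard", ["peer-reviewer"])

  def review_tier(changed_paths):
      """Return the strictest tier (and reviewers) triggered by a change set."""
      matched = [(tier, reviewers)
                 for pattern, tier, reviewers in TIER_RULES
                 for path in changed_paths
                 if fnmatch(path, pattern)]
      if not matched:
          return DEFAULT_TIER
      # Prefer the rule that demands the most reviewers (a crude strictness proxy).
      return max(matched, key=lambda m: len(m[1]))

  if __name__ == "__main__":
      tier, reviewers = review_tier(["auth/token.py", "docs/README.md"])
      print(tier, reviewers)  # -> high-risk ['security-engineer', 'staff-engineer']

In practice this kind of routing usually lives in a CODEOWNERS file or a merge-queue rule rather than a script, but the idea is the same: scarce reviewers see only the changes that genuinely need them.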

Tools Transforming Code Reviews in 2025

Most good process changes involve tools, and code review is no exception. 

During the development phase, devs are using chat and agents for AI-augmented "rubber ducking". This is—in my view—a quasi-replacement for some of the benefits of pair programming. It has the unfortunate downside of reducing benefits for junior developers, but that's another topic for a different post. This gives the engineer the kind of as-we-develop code reviews that cut down on painful adjustments late in the cycle.

As I mentioned above, there is a swath of tools that developers now use client-side, in pre-commit hooks, and in CI/CD pipelines to catch issues before a human code review. These include:

  • Formatters (e.g. Prettier, Black, ClangFormat)
  • Linters (e.g. ESLint, Pylint, SonarLint, Checkstyle)
  • Static Analysis Tools (e.g. Bandit, Cppcheck, Semgrep, SonarQube)
  • Security scanners (e.g. Snyk)

There are also newer AI-powered code review tools, focused on the individual developer. These should not replace human reviewers because of the benefits of team code review—I cannot stress this enough—but they do cut down on the time required to prep for and perform code reviews. Tools include dedicated ones like Qodo Merge as well as components within larger platforms such as Swimm, CodeRabbit, and LinearB. Many of these can integrate seamlessly into your CI/CD pipeline and give individual developers quick feedback.

What's largely missing are team-wide code understanding tools aimed at engineering leaders.

Measuring Code Review Effectiveness

So what can engineering leaders do to understand how effective their team's code reviews are? I recommend looking at the outputs first. Go back to the four key DORA metrics:

  • Throughput
    • Change lead time
    • Deployment frequency
  • Stability
    • Change fail percentage
    • Failed deployment recovery time

Yes, these are lagging indicators, but as an engineering leader, they're critical to keep in front of you. A spike in the wrong direction—particularly for deployment frequency and change fail percentage—could indicate that you have issues with your code review process.
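If you want a feel for the arithmetic, here's a minimal Python sketch computing change lead time and deployment frequency from exported pipeline data. The record format is a made-up example; most CI/CD and DevOps analytics tools expose equivalent commit and deploy timestamps.

  # A minimal sketch: change lead time and deployment frequency from a
  # hypothetical export of (commit_time, deployed_time) pairs per change.
  from datetime import datetime, timedelta
  from statistics import median

  deploys = [
      (datetime(2025, 6, 2, 9, 0),  datetime(2025, 6, 2, 15, 30)),
      (datetime(2025, 6, 3, 11, 0), datetime(2025, 6, 4, 10, 0)),
      (datetime(2025, 6, 5, 14, 0), datetime(2025, 6, 5, 16, 45)),
  ]

  lead_times = [deployed - committed for committed, deployed in deploys]
  print("Median change lead time:", median(lead_times))

  window = max(d for _, d in deploys) - min(d for _, d in deploys)
  per_day = len(deploys) / max(window / timedelta(days=1), 1)
  print("Deployments per day (approx):", round(per_day, 2))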

It's equally important to understand your code quality and the effectiveness of code reviews toward that end. As an engineering leader, you don't have time to look at individual code reviews, nor do you want to blindly trust AI code review tools looking at each PR individually.

Recommendations for Engineering Leaders

Instead, you need a tool that helps you understand the output of code reviews in the aggregate. You need to keep track of changes in code quality over time. You need to understand shifts in code review latency and how they may be related to code complexity or team dynamics. In the worst case, you can use these to identify cases where your team's code review process is breaking down. But you can also use a tool like this to help your team focus their limited energies on the parts of code review that matter most to you. You can use the feedback cycle from a tool like this to improve your code review process.
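As one small example of what "in the aggregate" can look like, here's a Python sketch of tracking time-to-first-review and eyeballing how it relates to change size. The PR records are hypothetical; in practice you'd pull opened and first-review timestamps from your source-control platform or an analytics tool.

  # A minimal sketch of aggregate review-latency tracking over hypothetical PR records.
  from datetime import datetime
  from statistics import median

  prs = [
      {"opened": datetime(2025, 6, 2, 9, 0),  "first_review": datetime(2025, 6, 2, 13, 0),  "lines_changed": 120},
      {"opened": datetime(2025, 6, 3, 10, 0), "first_review": datetime(2025, 6, 4, 9, 30),  "lines_changed": 900},
      {"opened": datetime(2025, 6, 5, 8, 0),  "first_review": datetime(2025, 6, 5, 11, 15), "lines_changed": 60},
  ]

  latencies = [pr["first_review"] - pr["opened"] for pr in prs]
  print("Median time to first review:", median(latencies))

  # A crude look at whether larger changes wait longer for review.
  for pr in sorted(prs, key=lambda p: p["lines_changed"]):
      print(pr["lines_changed"], "lines ->", pr["first_review"] - pr["opened"])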

Ideally, this is all part of a culture of constructive feedback that you are fostering as an engineering leader. The more we remind our teams of why we do reviews—keeping them aligned with our goals—the better their code reviews will be, and the sounder their judgment as they continually find the balance between automation and human insight in code reviews.

In Conclusion…

Code reviews have evolved from simple bug detection to a vital practice for quality, knowledge sharing, and security. While tools handle basic checks, human insight remains crucial, even as distributed teams and AI-generated code increase complexity and volume. The future of code reviews lies in smart, balanced approaches that leverage automation to pre-review, allowing human reviewers to focus on critical aspects. Engineering leaders, enhance your team's code understanding and drive efficient, high-quality reviews with Flux. Explore Flux today to transform your code review strategy and foster a culture of excellence.

About Aaron

Aaron Beals is the CTO of Flux, where he assembles high-performing engineering teams and translates business needs into innovative solutions. At Flux, Aaron combines his experience building scalable, secure enterprise-class SaaS apps with cutting-edge AI models to deliver new products to support engineering managers like himself.
