The Code Review Nobody Does Right


Most code reviews I have seen follow the same script: someone opens a PR, a teammate scans for obvious bugs, leaves a few nit comments about naming, approves, and moves on. Maybe fifteen minutes of attention, maybe less. Ship it.

That is not a code review. That is a spell-check with extra steps.

The best code reviews I have been part of — as author and as reviewer — have changed how I think about a problem. They have surfaced a design flaw before it became load-bearing. They have taught me an API I did not know existed. They have started a conversation that reshaped the feature.

The gap between those two experiences is enormous, and almost nobody talks about it.

What Most Reviews Actually Optimize For

Bug detection. That is what most teams think code reviews are for. Catch the off-by-one error, spot the missing null check, flag the SQL injection risk.

Those things matter, and you should catch them. But if bug detection is your primary goal, you have already made a mistake — because bugs are the cheapest thing to find in a code review. Static analysis, tests, and type checkers catch most of them before a human ever looks at the code. The things that automated tools cannot catch are exactly what reviews should focus on.

Design. Legibility. Knowledge transfer. Consistency with how the rest of the team thinks and builds.

These are hard to automate and hard to recover from once they get merged and other things build on top of them.
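To make the "cheap bugs" point concrete, here is a hedged Python sketch (the `find_user` function is hypothetical, standing in for any lookup that may fail): a missing None check is exactly the kind of thing a type checker like mypy flags before a human ever opens the diff.

```python
from typing import Optional

def find_user(user_id: int) -> Optional[dict]:
    """Hypothetical lookup: returns the user record, or None if absent."""
    users = {1: {"name": "Ada"}}
    return users.get(user_id)

# mypy would reject the unguarded version, because find_user(2) may be None:
#     name = find_user(2)["name"]   # error: Optional[dict] is not indexable

# The guarded version passes the type checker — no reviewer needed:
user = find_user(2)
name = user["name"] if user is not None else "unknown"
```

Whether the lookup *should* return None, or raise, or return a default — that is a design question, and no tool will answer it for you.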

TIP

Before you open the diff, read the PR description. If there is no description — or if it just says “fixes bug” — ask for one. The context behind a change is often more important than the change itself.

The Three Layers of a Good Review

I think about code reviews in three layers, from surface to deep:

Layer 1 — Correctness. Does this code do what it says it does? Are there edge cases the author missed? Is the error handling reasonable? This is the layer most reviewers spend all their time on.

Layer 2 — Design. Is this the right way to solve the problem? Would a different abstraction make future changes easier? Is this adding complexity that will compound? Does this fit with how the rest of the system is structured? This is where the real leverage is, and where most reviews stop short.

Layer 3 — Knowledge transfer. Does every member of the team understand this change well enough to own it? If the author disappeared tomorrow, could someone else debug it, extend it, delete it safely? If the answer is no, the review is not done.

Most reviews only happen at Layer 1. Good ones reach Layer 2. The best ones treat Layer 3 as a requirement.

How to Actually Leave Useful Comments

There is an art to leaving review comments that improve code without demoralizing the author or creating pointless back-and-forth.

The most useful framing I have found is separating questions from suggestions from blockers.

A question is something you do not understand: “Why did we choose to do X here instead of Y?” It invites explanation and often either teaches you something or makes the author realize they should rethink.

A suggestion is optional: “I wonder if we could simplify this by doing Z — but it works as-is.” No pressure, just thinking out loud.

A blocker is something that needs to change before this ships: “This will break if the list is empty — we need to handle that case.”
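To make the empty-list blocker concrete, a minimal Python sketch (the `average` function is hypothetical, standing in for the code under review):

```python
def average(values: list[float]) -> float:
    """Mean of a list of numbers.

    The guard below is the blocker fix: without it,
    sum(values) / len(values) raises ZeroDivisionError on [].
    """
    if not values:
        return 0.0  # or raise ValueError, depending on the caller's contract
    return sum(values) / len(values)
```

Note that even the fix embeds a design choice (return a sentinel, or raise?) — which is why a good blocker comment names the failure case rather than dictating the patch.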

When you conflate these — when everything feels like an equally urgent comment — authors do not know what to prioritize and reviewers feel like they did a thorough job when they just nitpicked.

TIP

Label your comments explicitly: [question], [nit], [suggestion], [blocker]. It sounds formal, but it makes the conversation dramatically faster.
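In practice, a labeled thread might look like this (hypothetical examples):

```
[question] Why retry three times here instead of failing fast?
[nit] `tmp2` could use a more descriptive name.
[suggestion] This loop might collapse into a dict comprehension — fine as-is.
[blocker] This will raise ZeroDivisionError when `values` is empty.
```

The author can resolve the nit and the suggestion without a reply, answer the question once, and knows exactly which comment gates the merge.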

The PR That Is Too Big to Review

There is a form of technical debt that lives entirely in your PR process: the enormous pull request that nobody can reasonably review.

Five hundred lines across fifteen files. Changes to the data layer, the API, the UI, and the test suite all in one branch. By the time you have read through it, you have lost the thread of the first file.

These PRs get approved without real review because real review is not humanly possible at that scale. Then they get merged, and everyone learns nothing, and the design decisions baked into those five hundred lines become the unquestioned foundation everything else builds on.

If your PRs regularly exceed 300-400 lines, the problem is not your reviewers — it is your workflow. Break the work into smaller, reviewable units. It forces better design, it makes the review conversation sharper, and it means merges happen faster because the blast radius of any one change is smaller.

Reviews as Team-Building

The best code review culture I have experienced was on a team where reviewers were expected to say what they learned from the PR, not just what they found wrong with it.

That small shift changed everything. It meant the author wrote more explanation. It meant the reviewer engaged more deeply. It meant the conversation in the PR became a record of how the team’s thinking evolved — not just a list of corrections.

Code reviews are the highest-leverage moment you have for building shared standards. Not documentation, not architecture meetings — the actual line-by-line conversation about how you build things together.

Most teams use that moment to check a box. The best ones use it to grow.


You do not need a process overhaul to improve your code reviews. You need reviewers who show up with genuine curiosity instead of a checklist, authors who write context instead of just code, and a shared understanding that the goal is a better codebase — not just a merged PR.

The difference between a good review and a great one is about fifteen minutes of actual thinking. It is almost always worth it.