I caught myself doing it last week. A teammate opened a PR, I scanned the diff, recognized familiar patterns, saw clean formatting, and left an “LGTM” in under three minutes. The code looked right. It sounded right. I moved on.
Later I realized I hadn’t actually understood what the code did. I’d reviewed the shape of it, not the substance. And I’m pretty sure the AI-generated code, which was most of it, would have passed any visual scan I threw at it.
The LGTM trap
We’re writing more code than ever thanks to AI. The output is clean, well-structured, and follows conventions. It looks like code a senior developer would write. And that’s exactly the problem.
AI-generated code is optimized to look reviewable. Consistent naming. Proper abstractions. Reasonable patterns. Your brain pattern-matches it as “good” before you’ve even processed what it does. So you approve it. You move on. You trust the machine.
Async code review was already struggling before AI entered the picture. Long feedback loops, context-switching between your own work and someone else’s PR, drive-by comments that miss the bigger picture. Now add AI-generated code that’s harder to scrutinize because it looks so damn competent, and the whole process starts falling apart.
Pair code review
I’ve been thinking about a different approach: pair code review.
Instead of the usual async ping-pong in PR comments, the author walks a teammate through the changes on a quick call. The author drives. They decide which parts need deep attention and which are trivial AI-generated boilerplate not worth spending time on. The reviewer asks questions in real time.
Here’s the thing. You can’t walk someone through code you don’t understand. The moment you have to explain your solution out loud, you’re forced to actually comprehend it. Every line. Every decision. Whether you wrote it by hand or whether AI generated 100% of it doesn’t matter. If you can’t explain it, you don’t own it.
That’s the accountability mechanism async review lost somewhere along the way.
The compounding benefits
Instant feedback. No waiting hours for a comment, then hours for the response, then another round of comments. Fifteen to thirty minutes and you’re done.
Real mentoring. Juniors don’t get a list of dry suggestions to implement blindly. They get a conversation. They hear how a senior thinks about the problem, why certain tradeoffs were made, what alternatives were considered. That’s worth more than a hundred inline comments.
Shared understanding. The whole team builds a mental model of the codebase together, instead of each person only knowing the parts they wrote. When AI is generating large chunks of code, this shared context becomes critical. Someone on the team needs to understand what’s going into production.
The time objection
Yes, this costs time. Thirty minutes of synchronous attention from two people is expensive.
But if we’re honest about how much time AI saves during development, reinvesting some of that into deeper review seems like a fair trade. You’re not adding overhead to the old process. You’re redirecting time that AI freed up toward the part of the process that’s breaking down.
This isn’t meant to be mandatory for every PR. It’s a tool. Especially useful for complex changes, critical paths, or anything where the author used AI heavily and wants to make sure they actually understand what they’re shipping.
The worst code review is the one where nobody understood the code. Including the person who submitted it.