Google’s code review guidelines state that one business day is the maximum time to respond to a review request. Meta ran an A/B experiment across 30,000 engineers that nudged reviewers toward faster responses, cutting review time by 6.8% and time-to-first-action by 9.9%. A survey of 75 practitioners across industry and open source found that “quick reaction time is of utmost importance.”

Speed matters. But there’s a tension: multiple studies show that rushing reviews degrades quality and lets defects slip through. Thorough reviews catch more bugs. Fast reviews improve velocity.

Except the conflict is largely misrepresented. The research suggests response time and review thoroughness aren’t opposite sides of a trade-off. They’re orthogonal concerns that only collide when you ignore cognitive constraints.

Response time isn’t review time

To be pedantic about Google’s guideline: it’s one business day to respond, not to complete the review. A reviewer can acknowledge a PR within hours (“I’ll look at it this afternoon”) while spending appropriate time actually reviewing. Meta’s experiment measured this explicitly: it tracked time-to-first-action and total time-in-review as separate metrics, with guard metrics ensuring that “time reviewers spend examining diffs” didn’t decrease even as response times improved.
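To make the distinction concrete, here’s a minimal sketch of how the two metrics could be computed from review timestamps. The field and function names are mine, not Meta’s instrumentation:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Review:
    published: datetime               # author publishes the PR
    first_reviewer_action: datetime   # first comment, question, or ack
    closed: datetime                  # approved, merged, or abandoned

def time_to_first_action(r: Review) -> timedelta:
    # How long the author waits before anyone reacts at all.
    return r.first_reviewer_action - r.published

def time_in_review(r: Review) -> timedelta:
    # Total wall-clock time the change sits in review.
    return r.closed - r.published

# The two move independently: a quick "I'll look this afternoon"
# shrinks time_to_first_action without touching time_in_review.
```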

Engineers rarely complain about a good review. If there’s bikeshedding, sure, they’ll be annoyed, but most of the time it’s radio silence that kills them, because they don’t know whether the reviewer saw the PR, forgot it, or is actively ignoring it. Meta found that the slowest 25% of reviews (beyond the P75) correlated with engineer dissatisfaction more strongly than the average review time did. Like tail latency in distributed systems, the outliers hurt disproportionately.
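For intuition on why the tail matters more than the average, a tiny sketch with made-up wait times:

```python
import statistics

# Hypothetical review wait times in hours; most are fine, a few stall.
waits = [2, 3, 3, 4, 5, 5, 6, 30, 48, 72]

mean = statistics.mean(waits)              # ~17.8 hours
p75 = statistics.quantiles(waits, n=4)[2]  # 75th percentile, ~34.5 hours
print(f"mean={mean:.1f}h  p75={p75:.1f}h")
```

The mean looks tolerable; the tail is where authors sit refreshing the page.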

The 24-hour response window gives authors predictability. They can plan around it: start the next task, context-switch knowing when to check back, and skip the constant “should I ping?” uncertainty. A fast acknowledgment keeps the queue flowing even when the thorough review itself takes longer.

Thoroughness has a floor

But you can’t sacrifice review quality for speed. Working memory has limits, and you can’t review effectively past its capacity. When Meta optimized for response time, the “eyeball time” they tracked ended up as a guard metric, there to ensure that faster acknowledgments didn’t create rubber-stamp reviews.

What makes responsiveness and thoroughness compatible is size. A 200-line PR can get same-day response and careful attention. The conflict appears when PRs balloon to 500 lines or more. Now the same care per line blows past cognitive limits, making same-day response nearly impossible.

AI strains the balance

AI changes the equation. An analysis of 153 million lines projects that code churn will double. Security studies keep finding vulnerabilities in AI-generated snippets. So reviews take longer, because AI introduces subtle issues: the wrong pattern for the problem, copy-pasted blocks that should have been abstracted, pieces that work in isolation but don’t fit the system. Finding these requires understanding both what the code does and what it should do, which takes more cognitive work per line.

Meanwhile, velocity pressure is ramping up, not from expectations alone but from economic reality. Little’s Law says the average number of PRs in the system equals the arrival rate times the average time each one spends there (L = λW). If the arrival rate climbs (AI enables more PRs, good or bad) while reviewer capacity stays flat or shrinks, time in system climbs too, and the queue length explodes. The thoroughness floor rises just as capacity tightens.
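A back-of-the-envelope sketch of that dynamic, using an M/M/1 queue approximation with made-up numbers (not data from any of the studies above):

```python
def avg_prs_in_system(arrival_rate: float, service_rate: float) -> float:
    """Average number of PRs open (waiting or under review), M/M/1 model.

    arrival_rate: PRs opened per day
    service_rate: PRs the team can thoroughly review per day
    Little's Law: L = arrival_rate * W, where W is avg time in system.
    """
    if arrival_rate >= service_rate:
        return float("inf")                   # queue grows without bound
    wait = 1 / (service_rate - arrival_rate)  # avg time in system (days)
    return arrival_rate * wait                # L = lambda * W

# Same reviewer capacity (10 PRs/day), AI-assisted authors open more PRs:
for arrivals in (8, 9, 9.5, 9.9):
    print(arrivals, round(avg_prs_in_system(arrivals, service_rate=10), 1))
# 8 -> 4.0, 9 -> 9.0, 9.5 -> 19.0, 9.9 -> 99.0: the queue explodes
# well before arrivals actually exceed capacity.
```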

The way out, then, isn’t telling everyone to be faster (or to do more with less). It’s decomposing and disambiguating the work to match the new load: make it obvious who reviews what, and cap work-in-progress. AI-infused growth is testing whether review processes were designed around human constraints or whether we just got lucky with manually written code staying small. Either way, responsiveness and thoroughness don’t have to trade off; they’re compatible when you architect the system for how reviewers actually work.
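If you wanted to make “who reviews what” explicit and cap work-in-progress in code, it could look something like this sketch. The cap, names, and routing rule are hypothetical illustrations, not a recommendation from the research:

```python
from collections import Counter

WIP_CAP = 3  # illustrative cap on concurrent reviews per reviewer

def assign_reviewer(open_reviews: Counter, candidates: list[str]) -> str | None:
    """Route a new PR to the least-loaded eligible reviewer.

    open_reviews: reviewer -> number of reviews currently on their plate.
    Returns None when everyone is at the cap, so the PR waits in a visible
    queue instead of silently piling onto someone's backlog.
    """
    eligible = [r for r in candidates if open_reviews[r] < WIP_CAP]
    if not eligible:
        return None
    return min(eligible, key=lambda r: open_reviews[r])

# Example: two reviewers at the cap, one with room left.
load = Counter({"alice": 3, "bob": 3, "carol": 1})
print(assign_reviewer(load, ["alice", "bob", "carol"]))  # -> carol
```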