When you assign a PR to your team instead of a specific person, you’re not being collaborative. You’re triggering the bystander effect.
Psychological research has known this since 1968: people are less likely to act when responsibility is diffused across a group. Meta’s code review team suspected the same dynamic was slowing reviews where only a team was assigned, so they ran an A/B test with 12,500 authors that either randomly assigned one of the top three experts to each PR or left it assigned to the team. Individual assignment cut review time by 11.6%.
The mechanism, according to Meta’s researchers, is straightforward. When a specific reviewer is assigned, they feel ownership. When it’s “the team,” everyone assumes someone else is more qualified or has more time. The PR sits. The pattern shows up in other studies too, like this one, where reviews with assignment problems take an average of 12 days longer to complete.
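Mechanically, the treatment arm is simple. Here’s a minimal sketch of that assignment rule, assuming you already have some familiarity signal per candidate (the `expertise` function is a hypothetical stand-in; the paper doesn’t describe Meta’s actual ranking model):

```python
import random
from typing import Callable, Sequence

def assign_reviewer(candidates: Sequence[str],
                    expertise: Callable[[str], float],
                    top_k: int = 3) -> str:
    """Rank candidates by a familiarity signal for this PR, then pick one
    at random from the top k, so exactly one name lands on the PR."""
    ranked = sorted(candidates, key=expertise, reverse=True)
    return random.choice(ranked[:top_k])
```

The randomness among the top three is doing quiet work here: it spreads load a little without ever leaving the PR unowned.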
The Speed-Knowledge Tradeoff
Individual assignment clearly wins on speed, but there’s a cost: less knowledge sharing, less exposure to different parts of the codebase, fewer chances to learn from other people’s coding styles. When you assign to individuals, you optimize for throughput at the expense of these collaborative benefits.
The burnout risk matters too. If the same few people handle all reviews, you create a bottleneck that eventually breaks. I’ve seen tech leads become overwhelmed precisely because they’re the obvious expert. The team defers to them by default, and suddenly they’re reviewing everything.
Meta tried a workload-aware variant to balance individual assignment with distribution concerns. It showed no statistically significant impact. The paper notes there was “limited real-world effectiveness compared to backtesting predictions.” The tension between speed and sustainable workload distribution remains unsolved.
Beyond Assignments
The bystander effect probably matters most in asynchronous review cultures. Pair programming effectively bypasses it: continuous involvement, zero wait time, no ambiguity about who’s responsible. The problem emerges when multiple people could review but no one feels they must.
CODEOWNERS files apply the same principle at the org-to-team boundary, implementing individual (or small-team) assignment at the folder or repo level. Mobile code changes ping the mobile team, not the entire engineering org. That cuts the notification noise and makes responsibility explicit. There’s value in extending this all the way down to individual assignment, as in the example below.
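For illustration, GitHub’s CODEOWNERS syntax makes the routing explicit. The paths and handles below are made up; the rule to remember is that the last matching pattern wins, so you order from general to specific:

```
# Default: everything falls to the platform team.
*                  @org/platform-team

# Narrower ownership for the mobile tree.
/mobile/           @org/mobile-team

# Narrowest: two named individuals own payments code.
/mobile/payments/  @alice @bob
```

The last line is the interesting one. Naming two people instead of a team is exactly the individual-assignment move, baked into the repo.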
I’m curious about the curves here. Darley and Latané showed a drop from 100% to 62% with a group of five. What’s the shape for code review? Two reviewers, five, ten, or assigning to the whole team? Does the cliff happen immediately, or gradually?
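One crude way to frame it: if n potential reviewers each acted independently with probability q, the group would respond with probability 1 − (1 − q)^n, and adding people could only help. Backing q out of an observed group rate shows how hard diffusion bites. This is a toy model I’m layering on, not anything from the studies:

```python
def implied_per_person_rate(p_group: float, n: int) -> float:
    """Back out the per-person action probability q from an observed
    group response rate, under the naive independence model
    p_group = 1 - (1 - q)**n."""
    return 1 - (1 - p_group) ** (1 / n)

# Darley and Latané's group-of-five condition: a 62% group response
# rate implies q of roughly 0.18, versus ~1.0 when a person is alone.
print(implied_per_person_rate(0.62, 5))  # ~0.176
```

The gap between that implied ~0.18 and the near-certain response of someone who knows they’re the only one is, roughly, the size of the diffusion effect.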
And there’s also the workload question Meta couldn’t solve. What would a successful workload-aware individual assignment actually look like? Random selection from the top three experts worked, but maybe there’s a smarter rotation or load-balancing strategy that preserves the ownership effect without burning out your best reviewers. Anecdotally, random assignment within small groups works remarkably well compared to the heavier process investments teams otherwise have to make.
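I don’t know what Meta’s workload-aware variant looked like internally, but one shape the idea could take is a load cap over the same top-k pool: keep a single named reviewer, just skip experts who are already saturated. A sketch with hypothetical names and thresholds:

```python
import random
from typing import Callable, Sequence

def assign_with_load_cap(candidates: Sequence[str],
                         expertise: Callable[[str], float],
                         open_reviews: dict[str, int],
                         top_k: int = 3, max_open: int = 5) -> str:
    """Pick one reviewer from the top-k experts, skipping anyone already
    at their open-review cap; if everyone is maxed out, fall back to the
    full ranking rather than leaving the PR team-assigned."""
    ranked = sorted(candidates, key=expertise, reverse=True)
    under_cap = [c for c in ranked if open_reviews.get(c, 0) < max_open]
    pool = (under_cap or ranked)[:top_k]
    return random.choice(pool)
```

Whether a cap like this preserves the ownership effect or just shuffles the bottleneck is exactly the open question.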
Also worth noting: if AI dramatically increases the arrival rate of PRs into your review queue (and I think it does), then making your assignment strategy more efficient becomes more important, not less. An 11.6% improvement in review time might not sound dramatic until your PR volume doubles.