Have you ever wondered whether your team's code reviews are in line with industry standards? Are you unsure if your review pace is reasonable or lagging behind? Having a solid reference to benchmark against can be helpful, so I've done some research and summarized the key findings here that might be useful to reference and ponder.

Review Frequency and Speed

Google

  • Median of 4 MRs reviewed per week [1]
  • 80% of reviewers review fewer than 10 MRs per week [1]

Time to get initial feedback on an MR/PR:

Google

  • < 1 hour for small changes [1]
  • 5 hours for large changes [1]

Overall latency for the entire review process (all change sizes):

  • Google - 4 hours (short largely due to the emphasis on small changes at Google) [1]
  • AMD - 17.5 hours [1]
  • Microsoft - ~17 hours (24 hours in a different study) [1]

Review Size (median lines modified - i.e. added/removed):

  • Google - 24 [1]
  • AMD - 44 [2]
  • Lucent - 263 [1] **
  • Microsoft (Bing/Office/SQL) - 90-150 [2]
  • Android - 44 [2]

** The Lucent data was from 1993-1995 when coding tools/practices were different.

Median Number of Reviewers

  • Google - 1 Reviewer [1]
  • Most other companies and OSS - 2 Reviewers [2]

Researchers Rigby & Bird [2] found a "minimal increase in the number of comments about the change when more [than 2] reviewers were active", and concluded that two reviewers find an optimal number of defects.

Time spent reviewing per week, per engineer

  • OSS - 6.4 hours [1]
  • Google - 3.2 hours [1]

Suggesting Reviewers

  • Reviewers are mostly suggested (automatically) based on those who recently changed the same file/codebase (a rough sketch of this heuristic follows this list). [1]
  • New team members are added to expose them to parts of the codebase. [1]
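
To make the "recent changers" heuristic concrete, here is a minimal Python sketch. This is not how any particular tool implements suggestion; the function name suggest_reviewers, the (author, files) history format, and the scoring by file overlap are all assumptions for illustration.

```python
from collections import Counter

def suggest_reviewers(changed_files, recent_commits, max_suggestions=2):
    """Score candidates by how many of the changed files they touched recently.

    recent_commits is a list of (author, files_touched) pairs -- a stand-in
    for whatever change history the review tool actually keeps.
    """
    scores = Counter()
    changed = set(changed_files)
    for author, files in recent_commits:
        overlap = len(changed & set(files))
        if overlap:
            scores[author] += overlap
    return [author for author, _ in scores.most_common(max_suggestions)]

# Hypothetical history: Alice recently touched both changed files, Bob one.
recent = [
    ("alice", ["billing/invoice.py", "billing/tax.py"]),
    ("bob", ["billing/invoice.py"]),
    ("carol", ["search/index.py"]),
]
print(suggest_reviewers(["billing/invoice.py", "billing/tax.py"], recent))
# -> ['alice', 'bob']
```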

For changes that could be sent to anyone on the team, many teams use a system that assigns reviews sent to the team email address to configured team members in a round-robin manner, taking into account review load and vacations. [1]
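
Below is a minimal sketch of what such a round-robin assigner could look like, assuming a simple in-memory model. The function assign_round_robin, the max_load threshold, and the vacation set are hypothetical; real systems track load and availability through their own services.

```python
def assign_round_robin(rotation, review_loads, on_vacation, max_load=10):
    """Pick the next reviewer in rotation order, skipping anyone who is on
    vacation or already carrying max_load or more open reviews."""
    for candidate in list(rotation):
        if candidate in on_vacation:
            continue
        if review_loads.get(candidate, 0) >= max_load:
            continue
        # Move the chosen reviewer to the back so others come up first next time.
        rotation.remove(candidate)
        rotation.append(candidate)
        return candidate
    return None  # nobody available -- fall back to manual assignment

team = ["dana", "eli", "fay"]
loads = {"dana": 12, "eli": 3, "fay": 1}
print(assign_round_robin(team, loads, on_vacation={"fay"}))  # -> 'eli'
print(team)  # rotation order is now ['dana', 'fay', 'eli']
```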

Reviewability

Reviewability is a term used to describe how "easy" it is to review a piece of code. A highly reviewable change is typically small, written in a clean and consistent manner, and adheres to the codebase's format and style. Writing code with the aim of "high" reviewability is important so that reviews can be performed faster.
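
As a rough illustration only, here is a tiny heuristic that flags changes likely to be slow to review based purely on size. The thresholds are illustrative guesses loosely anchored to the median change sizes listed above, not a validated metric.

```python
def reviewability_hint(lines_changed, files_touched):
    """Flag changes that are likely to be slow to review, based only on size.

    The thresholds below are illustrative guesses loosely anchored to the
    median change sizes cited above (~24-44 lines), not a validated metric.
    """
    if lines_changed <= 50 and files_touched <= 3:
        return "likely quick to review"
    if lines_changed <= 200:
        return "consider splitting into smaller, self-contained changes"
    return "split this change -- reviewers will struggle to give fast feedback"

print(reviewability_hint(24, 2))    # -> likely quick to review
print(reviewability_hint(450, 12))  # -> split this change ...
```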

Interesting Finds/Observations

Review has changed from a defect-finding activity to a group problem-solving activity [2]

Citations

[1] Modern Code Review: A Case Study at Google

[2] Convergent Contemporary Software Peer Review Practices