Code reviews are a critical step in the development cycle, improving code quality and maintainability. Yet many engineering teams forgo code reviews and instead rely on automated testing or manual QA to determine whether a piece of code is ready for release.
Research has shown that design and code inspection yields a much higher defect-detection rate than software testing, as Steve McConnell notes in his book Code Complete.
But how do you maximize the benefits of code reviews — especially when managing a distributed and asynchronous team?
First, ensure your team is following some tried-and-true best practices. Second, leverage modern tools to enable collaborative async communication. Let’s dig in:
You don’t have to reinvent the wheel. Build a solid foundation by starting with these best practices:
Create a checklist with criteria your team should adhere to during the code review. It should cover the following:
Remove redundant comments in the code to improve readability.
Our attention to detail drops off after about 60 minutes, so it’s best for team members to conduct shorter, more frequent code review sessions. A shorter session also means you should set a lines-of-code (LoC) limit for each one (e.g., around 200-400 lines). In fact, a Cisco report found that programmers’ ability to spot issues drops after 200 lines.
Cramming too many code changes into one review can affect the quality of the output. Research on peer code review in distributed software development found that an increase in patch size is associated with a decrease in the effectiveness of the review process. Reviewers become less engaged, offer less input, and are more likely to overlook edge cases.
Finally, don’t rush the process. A study found that reviewers who go through fewer than 400 lines per hour have an above-average ability to find bugs. Meanwhile, those who move faster than 450 lines per hour find defects at a below-average rate.
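The session limits above can be sketched as a small planning helper. This is a minimal illustration, not part of any specific tool; the file names, line counts, and the 400-line cap are assumptions for the example.

```python
# Sketch: split a changeset into review sessions that respect a
# lines-of-code cap (400 here, per the guidance above).
# File names and line counts are purely illustrative.

def plan_review_sessions(changed_files, max_loc=400):
    """Greedily group (filename, loc) pairs into sessions of <= max_loc
    lines. A single file larger than max_loc gets its own session."""
    sessions, current, current_loc = [], [], 0
    for name, loc in changed_files:
        if current and current_loc + loc > max_loc:
            sessions.append(current)
            current, current_loc = [], 0
        current.append((name, loc))
        current_loc += loc
    if current:
        sessions.append(current)
    return sessions

changes = [("auth.py", 180), ("models.py", 150), ("views.py", 220), ("utils.py", 90)]
for i, session in enumerate(plan_review_sessions(changes), start=1):
    total = sum(loc for _, loc in session)
    print(f"Session {i}: {[name for name, _ in session]} ({total} lines)")
```

A greedy split like this keeps each sitting under the cap without requiring reviewers to track limits by hand.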
What gets measured gets done — objective measurements help you track the efficiency and effectiveness of code reviews, analyze how they support the software development process, and understand the time you should set aside to complete a project. Here are the key metrics to track and what they mean:
Inspection rate: The time it takes for your team to review a set amount of code. It can help you gauge the readability of the code.
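These metrics are straightforward to compute from session data. A minimal sketch follows; the session numbers are hypothetical, for illustration only.

```python
# Sketch: compute basic code review metrics from one review session.
# The session values below are made up for illustration.

def inspection_rate(loc_reviewed, hours):
    """Lines of code reviewed per hour."""
    return loc_reviewed / hours

def defect_density(defects_found, loc_reviewed):
    """Defects found per 1,000 lines of code (KLoC)."""
    return defects_found / (loc_reviewed / 1000)

loc, hours, defects = 350, 1.0, 4
print(f"Inspection rate: {inspection_rate(loc, hours):.0f} LoC/hour")
print(f"Defect density: {defect_density(defects, loc):.1f} defects/KLoC")
```

Tracking these per session makes it easy to notice when reviews are moving faster than the roughly 400 LoC/hour pace discussed above.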
Code reviews aren’t just about pointing out what needs to be fixed. Reviewers should focus on creating a collaborative and supportive environment by taking time to explain the reason behind each request or recommendation. They should also pose open-ended questions to encourage discussion and knowledge sharing.
A detailed explanation helps speed up the review process because it prevents the need for additional back-and-forth between the author and reviewer.
To err is human. Even the best programmers make mistakes, and the most meticulous reviewers overlook bugs. As part of your continuous integration/continuous delivery (CI/CD) pipeline, a systematic code review includes an automated testing component and security checks to ensure that you have a high-quality product.
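One way such automated gates might be wired into a pipeline is a simple pre-merge check that aggregates results before the code reaches a human. The check names and zero-tolerance thresholds below are assumptions for the sketch, not a prescription; in a real CI/CD pipeline these results would come from your test runner and security scanner.

```python
# Sketch: a pre-merge gate that aggregates automated check results.
# Check names and pass/fail values are illustrative placeholders.

def ready_for_human_review(checks):
    """Return (ok, failures): ok is True only if every automated check
    passed, so reviewers spend their time on design questions rather
    than on problems a machine can catch."""
    failures = [name for name, passed in checks.items() if not passed]
    return (not failures), failures

checks = {
    "unit_tests": True,
    "lint": True,
    "security_scan": False,  # e.g., a dependency with a known vulnerability
}
ok, failures = ready_for_human_review(checks)
print("Ready for review" if ok else f"Blocked by: {', '.join(failures)}")
```

Running this kind of gate before review keeps mechanical failures out of the conversation entirely.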
It’s often more challenging for distributed teams to implement code review best practices due to the lack of effective communication channels:
Sharing knowledge and input in written format, such as email, can turn something you can simply point out on the screen into a novella. Nuances often get lost, and subsequent exchanges may get overlooked amidst the endless threads of “reply all.”
Plus, meetings may not be the most cost-effective way to conduct a code review. A study found that development teams spend 75% of their time in meetings and 25% reading. Yet reading reveals 80% of defects, and reviewers are 12x more efficient at identifying issues through reading than through meetings.
An effective code review process should allow reviewers more time to examine the code and give them the tools to effectively communicate their findings asynchronously. Here’s how to recreate the immediacy of in-person interactions without endless meetings:
Written comments often focus on individual changes and don't encourage team members to take a step back and consider how the changes impact the whole system, future development tasks, and ease of maintenance.
The linear nature of most async communication methods often limits the scope of these discussions, making it hard to loop in the larger context and emphasize each item appropriately. If you branch off to a different topic, you risk digressing to the point where the team loses touch with the original subject.
Bubbles makes it a breeze to add hierarchy and context to your code review by walking your team through your comments via a screen share. You can switch among tabs to address the specifics of the code in question and illustrate how it fits into the big picture, all without losing sight of the line under discussion.
Team members can add their input, timestamped to show what it refers to, so they can discuss concerns pertaining to the larger context in a related thread without everybody wondering, “How did we get here, and how does this relate to that line of code?” after a few exchanges.
With dozens of comments on hundreds of lines of code, how do you manage multiple threads of conversations effectively and efficiently so everything gets addressed?
With bubbles, you can leave separate comments, so each one pertains to a specific line of code or issue in the video. Team members can discuss each and follow up to ensure that it’s resolved without plowing through a laundry list of feedback in an email or document.
Each item can branch out into its own discussion if necessary to keep comments manageable. You can tag the appropriate team members to jump into the conversation instead of making everyone read through a lengthy “reply all” thread that will cause massive confusion and frustration (be honest, who reads everything?).
Comments perceived as hostile or disrespectful can impact how team members receive feedback and their morale. We believe that people don’t want to be abrasive or impulsive intentionally, but most feedback methods don’t allow the opportunity or the space for reviewers to be thoughtful and empathetic.
Asynchronous feedback via a code review tool, email, or screenshot allows team members to go through everything before commenting. But they lack the immediacy to communicate nuances through gesture and tonality — making it easier for the audience to misinterpret the tone and intention of the comment.
On the other hand, synchronous meetings (e.g., video calls) help team members grasp the nuances of the comments. But the pressure for immediate response can make people reactive, so the conversation becomes driven by the first thought instead of the best thought.
How do you get the best of both worlds? Create a bubble to capture context and nuances by sharing your feedback through video. Meanwhile, async communication means team members can watch the entire video and digest the comments before responding to share the best thoughts instead of the first thing that comes to mind.
Asynchronous communication allows teams to conduct code reviews efficiently. When supported by the right tools, they can review more code in less time and collaborate constructively to address the larger context and improve the codebase over time.
Bubbles helps you improve async code reviews by organizing the threads, minimizing misunderstandings, and encouraging two-way collaboration so your team can continuously integrate feedback to create a better product.
There’s no excuse not to make a bubble today — you don’t have to sign up to get started, and your team doesn’t need to install anything except a browser to view your bubble. All you have to do is click the link and record your first bubble. Get started with bubbles by downloading our Chrome extension and see how easy it is to conduct an effective code review.