Code Review

For me, code review is usually the biggest – and often most useful – touchpoint between developers. Code changes are the nitty-gritty details, the rubber actually hitting the road in development work. It can be highly collaborative and helpful.

Like anything, it can be poorly utilized. And it probably shouldn’t be the first touchpoint for collaboration; design documents, pairing, etc. are great ways to share knowledge and build good stuff. There are times when code changes are the clearest and easiest way to think about something, but often that is not the case. Ultimately, though, changes to the system have to actually become realized, and that happens through code.

In a professional context, a team should generally prioritize code review. Everyone should try to get to new requests or feedback as quickly as possible; otherwise the review process can end up spanning a long period of time. That can lead to fatigue, more context-switching, and a rush to get things closed and landed at the end of a sprint/deadline.

In my experience, there is typically at least one round of feedback→updates for any non-trivial change.

The GNU Kind Communications Guidelines are a good reference for how to communicate at all levels of the process.

Purpose

The point of doing code reviews:

  • Build team familiarity with various parts of the code base
  • Build broader team knowledge (that is not necessarily related to the particular code base itself)
  • Build the right thing, in the best way

The author is responsible for their work and should be trusted with it. But ultimately the responsibility falls to the team.

It’s not about gate-keeping. It’s also not a trivial, rubber stamp process.

As Author

Write a good title and description. See my commit message section. Reference relevant tickets and provide context.

If it’s a work-in-progress or you are looking for specific feedback on things, tag it with [WIP] in front of the title and state what you are looking for in the description. The project (or team) may have other tags or a different tagging format; follow its guidelines.
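
For example, a work-in-progress submission might look something like this (the ticket number, feature, and exact wording are made up; use whatever your team’s conventions are):

    [WIP] Add retry handling to the nightly export job (PROJ-123)

    Sketch of retrying failed export uploads with backoff; context in PROJ-123.
    Looking for feedback on the backoff approach before I add tests and wire up
    configuration.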

What I’m looking for and the feedback I provide are different for a design or architecture review vs. something intended to ship immediately.

Include a “test plan” (or demo) section in the description as well. This describes what you did to verify that the changes do what they are supposed to. It helps your reviewer retrace your steps and also lets them suggest other things to consider that perhaps you haven’t tested.

Hopefully your changes come with some tests as well, and for simple things the test plan might be “ran the test suite”, but often you’ll want to actually exercise the changes as part of the larger application and verify aspects of the changes that can’t be captured adequately in the tests.
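
The test plan doesn’t need to be elaborate; something like the following made-up example gives the reviewer concrete steps to retrace (and shows what wasn’t covered):

    Test plan:
    - Ran the unit test suite locally
    - Started the app, triggered an export with an empty data set, and verified
      the new error message is shown instead of a crash
    - Not tested: behavior against a production-sized data set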

This article has more on writing good test plans and things to consider for testing your changes.

Write reviewable code. Generally try to keep commit/change sizes small and focused on one thing. The “one idea is one commit” approach may be worth considering. If you find yourself using words like “and” or “also” in your title, that might mean the changes could be broken down better. This is something I am continuously working on getting better at: often I’ll find little things that I fix in the course of some other work that I really should break out separately, but just include because I’m lazy. Do your best.
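
As a made-up illustration of the “and”/“also” smell:

    Instead of one commit titled:
      “Add input validation to signup form and fix typo in error log”
    prefer two:
      1. “Add input validation to signup form”
      2. “Fix typo in signup error log message”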

Take feedback graciously (which has hopefully been given kindly).

As Reviewer

Read the title and description. Ensure they make sense. If the change references another resource, say a ticket in a separate tracking system, go read it.

Make sure you understand what is trying to be achieved and why we are trying to achieve it. If you don’t understand the goal, don’t go any further; ask for clarification.

Hopefully CI is set up to automatically run linting and tests against all submitted changes.

If automated tooling for linting/formatting/type-checking/building/running tests is not available, I suggest prioritizing getting that set up; until then, pull down the changes and run all of that by hand.
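
What that setup looks like depends entirely on your hosting and tooling, but the cost is usually small. As a rough sketch, assuming GitHub Actions and placeholder make targets (substitute whatever commands your project actually uses):

    # .github/workflows/ci.yml -- minimal sketch; adapt to your project
    name: CI
    on: [pull_request]
    jobs:
      checks:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - name: Lint
            run: make lint   # placeholder; use your project's lint command
          - name: Test
            run: make test   # placeholder; use your project's test command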

If the build is failing, that probably needs to be fixed before spending time on reviewing the code, unless the failure has been acknowledged or is something where help is needed to figure out how to fix it.

Deploy the changes to your development environment, go through the posted test plan and otherwise exercise the changes to test what you understand the goal to be.
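
How you pull the changes down depends on the tooling; if the review lives on GitHub, for example, you can fetch the pull request branch directly (123 here stands in for the actual PR number):

    git fetch origin pull/123/head:review-123
    git checkout review-123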

You might find that it doesn’t work as you expect, that failure conditions aren’t covered, or that the feature is unintuitive or confusing (which might be okay if it’s documented, but maybe it isn’t).

Then you can start looking at code.

Do a quick skim of all the changes. Try to get a big picture view of things.

Then start back at the top and go over every line. Hopefully whatever review tool you are using (or are forced to use) supports building up a set of inline comments; if not, I suggest taking the diff and annotating it yourself as you go.

Things to keep in mind while providing feedback:

  • Address the work, not the person
  • Be flexible on minor nitpicks and be clear they are nits
  • Be firm on important things and be clear they are important things
  • Don’t be afraid to challenge the author’s decisions. Everyone should be held to the standard of explaining the decisions they are making. Do it kindly, of course, but you don’t have to accept a “just because” or hand-wavy answer.
  • If you don’t understand what is going on at first glance, make a comment about it. Maybe it becomes clear after reading a little further and you can go back and delete the comment, but if it takes a while to make sense, that might be a sign the code needs an explanatory comment.
  • Point out things you like!

And I’ll reiterate, make your intentions clear with feedback. If something is a suggestion, make sure it’s obviously so. If something is a blocker, state that. If you want a system to follow: conventionalcomments.org
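
Following that format, each comment leads with a label and an optional decoration so the intent is unambiguous; a few made-up examples:

    praise: Nice test coverage on the empty-input cases.

    nitpick (non-blocking): This could reuse the existing formatting helper,
    but it’s fine either way.

    issue (blocking): This query runs once per item, which will be very slow
    for large inputs. Can we batch it?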

After going over the changes in detail, read over the entire module/file/document containing the changes. Don’t get tunnel vision looking only at the diff. This is most applicable to documentation, where a change in one place can invalidate some other part and there’s limited tooling to help catch it, but it can be helpful with code too. Localized reasoning doesn’t work for everything; sometimes it’s helpful to see the whole thing.

In some code review systems, the accept/approve status can be “sticky”: you approve the changes along with comments about minor things to be fixed or considered before merging, and the author can push up those changes (to record what was done) while keeping the approval to merge, without having to get the reviewer to come back.

This is a nice effort saver if it works for your team. Sometimes there are folks who see “approved” and immediately merge the changes without looking for (or noticing) additional feedback, and they have a hard time breaking the habit. Do what works.

Note that this is generally my approach when working with people I have an established relationship with, that I trust, and/or are at an equivalent or higher “level” of experience/ability/etc.

When dealing with contributions from random people (maybe in an open-source project) or folks who are more junior, more scrutiny and time are usually warranted: being extra clear and precise in wording, linking more reference material, etc.

And code review does not have to be a thrown-over-the-wall practice; doing a live walk-through can be good! But generally there’s still value in having someone review the work who a) isn’t close to the work and b) doesn’t have the benefit of the author talking them through it, since that is likely going to be the situation for folks coming to the code later. Having someone review with only the written material as reference and note the things that are not understandable is good.

Misc.