I don’t like code reviews.
Specifically, I don’t like them in the context of providing a gate before merge.
Code Reviews are expensive
Performing a code review for someone else’s change takes time and effort. You need to understand what the change is, what its purpose is, and how that fits into the bigger picture of what is being built.
They’re also expensive in terms of the performance hit of having to context-switch. If the change is urgent, and the only person available is in the middle of flow or deep work, then this can have a huge context-switching penalty.
Some of this can be automated. There are linters, style-guides, and static analysis tools for almost every language. We’ve recently introduced pull request templates to front-load the answers to “What?” and “Why?”.
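As a sketch of what front-loading “What?” and “Why?” can look like, here’s a minimal pull request template (the headings here are illustrative, not necessarily the ones we use) that a repository might keep in `.github/PULL_REQUEST_TEMPLATE.md`:

```markdown
## What?

<!-- A short summary of the change itself -->

## Why?

<!-- The motivation: ticket link, user need, or bug being fixed -->

## Anything else?

<!-- Deployment notes, follow-up work, areas you'd like extra eyes on -->
```

The template doesn’t answer the questions for the reviewer, but it prompts the author to answer them before anyone has to ask.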
However, a lot of it can’t because it requires a real person to cast their eye over the semantics of the code, not just the syntax.
Which leads me to …
Code Review tooling isn’t great
Almost all tools that provide code review functionality focus exclusively on the syntactic change to the code, not the semantic changes.
A diffing algorithm can’t pick up the logic behind a change - it can only tell you that things have changed, in the form of additions and deletions.
I’ve had several discussions over my career about the desire for diffing tools that look at changes to the overall program structure rather than focusing on surface-level text changes.
Right now, however, it’s very easy to fixate on things like individual coding style, which text-diffing makes all too visible.
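To illustrate, here’s a small Python sketch using the standard library’s difflib: renaming a single variable produces a noisy textual diff, even though the program’s behaviour is completely unchanged - exactly the kind of surface-level change a reviewer can end up fixating on.

```python
import difflib

# Two semantically identical snippets: only a variable has been renamed.
before = """\
def total(items):
    result = 0
    for item in items:
        result += item.price
    return result
"""

after = """\
def total(items):
    subtotal = 0
    for item in items:
        subtotal += item.price
    return subtotal
"""

diff = list(difflib.unified_diff(
    before.splitlines(), after.splitlines(), lineterm=""))

# Count the lines the text diff flags as changed (ignoring the
# "+++"/"---" file headers).
changed = [line for line in diff
           if line.startswith(("+", "-"))
           and not line.startswith(("+++", "---"))]
print(len(changed))  # 6 lines flagged: 3 removals and 3 additions
```

A structure-aware tool could recognise this as a pure rename; a line-based diff can only report six changed lines.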
Smaller changes means more overhead
Reviewing a lot of changes at once is painful and time-consuming.
Large change-sets are inherently risky because of the sheer amount being changed, and they’re also much harder to review. That adds further risk: semantic errors might slip through, or code quality drops because no-one has the time to review that much code.
The obvious step is to fix the pain point - large changes are hard to review, let’s have multiple smaller changes instead!
Only thing is, you’re still spending more time on code review because there’s a fixed overhead per review in terms of time, energy, and context-switching as mentioned earlier.
Long feedback loops
Anything that happens between development and production widens the feedback loop and slows the rate of change.
In a lot of cases, we’re okay with this trade-off.
It takes longer to write tests and code than just write the code but we accept that the long-term benefit is (hopefully) higher code quality and reduced risk.
It takes longer to have a multi-stage automated test-and-deployment process than to just scp the files to production servers, but automation reduces human error.
Code reviews make the process take longer, and I don’t believe they provide any significant benefit over pair programming.
I’m not denying that pairing has its downsides - working in a pair for 6 hours per day, 5 days a week can be absolutely EXHAUSTING and not everyone likes to work that way. But I just think code reviews are worse as a replacement for “more eyes on the code”.
It takes time and energy to build an environment such that, for example, disparities in seniority have little effect on how a code review is received.
If you’re a senior reviewing code from someone more junior than you, the way that you communicate your opinions and the things that you would change can have a huge effect on their confidence and how they feel about future code they write.
Providing good code reviews is a learned skill, and people smarter and more learned than me have written lots on the topic. We have the GDS Way to provide review guidance.
In particular, I heavily agree with the rule of thumb to “minimise suffering for everyone involved - authors, reviewers, and users” and her three criteria, parentheses mine:
- Is it true? (Opinions might not be helpful)
- Is it necessary? (Getting off-topic or providing irrelevant content)
- Is it kind? (Just be kind ffs)
What’s the alternative?
I strongly believe in pairing or mob programming, where possible. It’s almost “real-time” code review because you have multiple sets of eyes working in the area.
Teams might find engaging in after-the-fact “code critique” a useful alternative to an external set of eyes on a code review.
The team as a whole takes a step back from the feature work in progress and looks at the changes holistically instead. How does this change we’ve deployed affect the quality of the codebase as a whole? Can we improve this, regardless of who wrote it?
This requires building an environment such that everyone on your team feels comfortable critiquing even the most senior developer’s code. By submitting your code for critique, it becomes about the team collaborating on ways to make the team’s code as good as it can be.
Compare this with code reviews where there’s a “Good” or “Bad” outcome - either the review is approved or it’s not.
I think code reviews can be good, if they’re approached with a whole-team attitude of writing good code rather than treated as a cargo-cult ritual that everyone must perform in order to progress their work.
November is National Blog Posting Month, or NaBloPoMo. I’ll be endeavouring to write one blog post per day in the month of November 2019 - some short and sweet, others long and boring.