Antifragile Software: Understanding human errors (code reviews)

Marek Kowalcze
5 min read · Apr 19, 2021

In this post we will go through some of the cognitive biases and thinking errors that can appear at different stages of the software development process. Let’s start with my favorite one — code reviews!

Cognitive Biases during Code Reviews

Code reviews can take up a lot of our working time (and they should!). Since it is an activity in which conflicting opinions collide, it is easy to overlook certain issues and hard to stay fully objective. In this section we’ll go through the following biases:

  • Ignoring important details in large Pull Requests
  • Unwillingness to abandon wrong approach
  • Defending code we have authored
  • Focusing too much on unimportant details

Bias: Ignoring important details in large Pull Requests

Caused by: Contrast effect, Weber–Fechner law

Why is this bad?

  • Large pull requests are simply harder to review. When reviewing bigger changes, people tend to skim details that would normally be inspected more carefully, so important issues can be overlooked. This, obviously, can lead to more defects in our system. Importantly, the effort behind a proper code review does not grow linearly with the change-set size. There are also other valid reasons to create smaller PRs.

What can we do?

  • (author) Try dividing pull requests into smaller chunks. It is sometimes challenging, but the effort is worth it.
  • (author) Use design reviews before the actual coding and split the work into smaller chunks during planning sessions.
  • (author) If there is no other option, try to at least organize your PR as a story where each commit represents one conceptual change. Additionally, you can add inline GitHub comments so that it’s easier for the reviewer to get the proper context of each change.
  • (reviewer) If we receive a large PR to review, it might be a good idea to run several iterations over it, each focusing on a single aspect (requirements, logic errors, code style, performance, etc.). If we decide to review this way, just let the author know about it :)
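
The “PR as a story” suggestion above can be sketched with plain git. This is a throwaway demo, not the post’s own workflow; the branch name and commit messages are hypothetical, and `--allow-empty` stands in for real changes:

```shell
set -e
# Throwaway demo repo: present a large change as a "story" of
# one-conceptual-step commits (names and messages are hypothetical).
dir=$(mktemp -d); cd "$dir"
git init -q
base=$(git symbolic-ref --short HEAD)          # default branch (main/master)
git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m "initial"
git checkout -q -b feature/large-change
for msg in "Extract validation into a helper (no behavior change)" \
           "Add the new validation rule" \
           "Wire the rule into the request handler"; do
  # --allow-empty stands in for real code changes in this sketch
  git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m "$msg"
done
# The reviewer can now read the change step by step:
git log --oneline --reverse "$base"..HEAD
```

In practice `git rebase -i` is the usual tool for reshaping an already-messy history into steps like these before opening the PR.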

Details

  • Now, some psychology. According to Prospect Theory, splitting a gain (in this case a completed PR represents some portion of our work done — it is our “gain”) into smaller parts allows us to have more “small victories”, which results in a more positive effect overall. So what’s stopping us from doing this for our large Pull Request? Usually these are technical reasons (inability to split the work without breaking the app) or team process constraints (1 JIRA ticket = 1 PR; testers preferring to perform one big check). The reason might also be psychological — as authors we would rather experience a single big negative event than multiple smaller ones, so in some circumstances it’s easier to receive one large code review with negative comments than to split it into more parts. More on this topic is explored in the Prospect Theory Suspect blog post.
  • Ultimately, it seems we have an author/reviewer conflict here that we should try to balance, because at the end of the day we act in both roles.

Bias: Unwillingness to abandon wrong approach

Caused by: Sunk cost fallacy

Why is this bad?

  • We try to finish implementations we have started even when the outcomes are not good. It is hard to stop working on something that has already taken a lot of our time and energy, even if dropping it is the best decision right now. Pushing a wrong implementation (or even a “wrong” feature) just to finish some “task” will likely create more problems in the future, on top of the first bad decision. Sometimes we don’t have a clear alternative. But if we do, remember that it is better to lose one week of one developer’s work now than months of the whole team’s work later.

What can we do?

  • Discuss technical solutions early on with the rest of the team. Talk with other teams too; a fresh look from someone not yet biased by the code base can really help uncover weak points in your solution.
  • Remember that code is shared and also temporary — there is a good chance that your perfectly crafted code will be changed anyway, sooner or later.

Bias: Defending code we have authored

Caused by: Self-serving bias, Confirmation bias, Choice-supportive bias

Why is this bad?

  • Admit it or not, we tend to defend existing solutions more when we happen to be their author. That’s why during code reviews we might be more skeptical of changes in places we created or modified recently, and we may end up looking mostly for reasons to keep the previous (our own) approach.

What can we do?

  • First of all, it is good to review changes in areas we touched recently, since we might have the best knowledge of how things should work there. But let’s not confuse expected software behavior with a particular implementation. One solution is to rely more on a second reviewer in such cases and leave the last word to someone else in case of disagreement.

Bias: Focusing too much on unimportant details

Caused by: Bike-shedding, Sayre’s law

Why is this bad?

  • When we are faced with too much information at once in a single Pull Request, we might be overwhelmed by its more difficult parts. It is simply easier for us to focus on less important things and steer the discussion toward those areas. This bias is called the Law of Triviality (or bike-shedding). In the worst case, a complex code change gets many trivial comments from several reviewers, none of them focusing on the core idea, and the most important part is never reviewed properly. This, however, goes unnoticed amid the large number of (less important) comments.

What can we do?

  • (reviewer) Leave the cosmetics for the end; start with the most crucial changes and try to find the different ways they can impact the system.
  • (reviewer) Focusing mostly on trivial things might be a sign that the reviewers lack domain knowledge.
  • Automate less important checks (e.g. code style), so that there is nothing left to discuss about them in code reviews.
  • In the same way, decide on other code conventions (those that can’t be automated) and write them down in a separate repository. If someone really needs to start a discussion on one of those subjects, there is a dedicated place for it other than your PR.
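
One way to automate such checks, sketched below in a throwaway repo, is a git pre-commit hook. The trailing-whitespace check is a minimal stand-in for a real formatter or linter, not a tool the post prescribes:

```shell
set -e
# Throwaway demo: a pre-commit hook that automates one "cosmetic" check
# (trailing whitespace) so it never needs to be raised in review.
dir=$(mktemp -d); cd "$dir"
git init -q
git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m "initial"
cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
# Reject the commit if any staged change introduces trailing whitespace.
if git diff --cached --check; then
  exit 0
else
  echo "style check failed: fix trailing whitespace before committing"
  exit 1
fi
EOF
chmod +x .git/hooks/pre-commit
printf 'clean line\n' > ok.txt
git add ok.txt
git -c user.email=a@b -c user.name=demo commit -q -m "passes the automated style check"
```

Real projects usually delegate this to a dedicated tool (a formatter plus a linter) and run the same checks in CI, so the rules are enforced identically for everyone.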

Summary

As you can see, it is easy to miss crucial code changes and, at the same time, steer the discussion toward less important details. Our biases also make it harder to criticize solutions in which we have “invested” more time. While we can automate some of the small aspects of code reviews, it looks like reviewing will stay a mainly manual task for a while. Hopefully, this post was a useful introduction to avoiding some of the most common errors in this area.
