At some point, every company introduces a new process. It promises to make things better, safer, and more efficient. In theory, it's brilliant. On paper, it addresses every potential pitfall. Then it hits the real world, and what sounded like a stroke of genius becomes a bureaucratic nightmare, often producing the exact opposite of its intended outcome.
One of the processes I often deal with once a company reaches a certain size is a fail-safe for the software development life cycle; in particular, code review. With many developers building applications day in and day out, there is always a chance that buggy code gets introduced and brings down the system. To guard against that, you need a rigorous code review process that catches bugs before they are merged into the code base. You can address this by training developers to follow a process, or you can enforce it with software. Many are tempted by the software solution, especially when it comes with a fancy landing page and some slick presentations.
So here is how this one went. On this repo, which any team can contribute to, reviewers need to raise at least two issues before a PR can be merged. That means you push some code, a reviewer raises an issue, you push some more code, they raise another issue, you fix them all, and then you are good to deploy. Why two? Because historically, all previous pull requests (PRs) had at least two issues in them.
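To make the rule concrete, here is a minimal sketch of the merge gate it implies. Everything here is illustrative: the function name, the `MIN_ISSUES` constant, and the inputs are hypothetical stand-ins for whatever the actual review tooling tracked, not the real system's API.

```python
# Hypothetical sketch of the gate described above: a PR is mergeable only
# once reviewers have raised a minimum number of issues, and all of them
# have been resolved. Names and threshold are illustrative assumptions.

MIN_ISSUES = 2  # "historically, all previous PRs had at least two issues"

def can_merge(issues_raised: int, all_issues_resolved: bool) -> bool:
    """Return True only if reviewers raised at least MIN_ISSUES and
    every raised issue has since been resolved."""
    return issues_raised >= MIN_ISSUES and all_issues_resolved

# A clean one-line typo fix with nothing genuine to flag is blocked:
print(can_merge(0, True))   # → False
# Only after two issues are raised (and fixed) does the gate open:
print(can_merge(2, True))   # → True
```

Written this way, the perverse incentive is plain to see: when a PR has zero real problems, the only path to `True` is manufacturing issues.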
This makes sense in theory. Even the most perfect code may have a flaw that a fresh pair of eyes can catch. It reminds me a bit of my school days in the French system. Back then, if you got a 15/20 on an essay, you were practically a genius. A 16? Your parents would be getting a call to discuss your boundless potential. But an 18? Oh, that meant you cheated. The unspoken rule was that nothing was ever truly "perfect." This ingrained skepticism about perfection, while perhaps extreme, has continued to haunt me when I see systems built on similar premises.
The Inevitable Slide into "Manufactured Compliance"
So, you implement this "find at least two errors" rule in your code review process. Initially, it might even work. Teams genuinely look for issues, and perhaps they find them. But what happens when a piece of code is genuinely, remarkably clean? What happens when a bug fix is so contained and straightforward that there's simply nothing significant to comment on?
This is where the theory collides with reality. Developers are under pressure to deploy. The application needs to go out. And suddenly, that "minimum of two errors" becomes a roadblock. What do people do? They start creating comments for the sake of compliance.
Example 1: The Trivial Takedown
Imagine a pull request for a simple text change on a website – fixing a typo, updating a date. The code is literally one line. Yet, the review process demands two comments. So, you start seeing things like:
- "Add a comment explaining what this variable does (even though it's obvious)."
- "Could we rephrase this commit message slightly?"
- "Consider adding a newline at the end of the file (a classic!)"
These aren't truly actionable or value-adding comments. They're placeholders, designed to satisfy a metric.
Example 2: The Emergency Deployment Dilemma
Now, picture an emergency situation. A critical bug is impacting users, and a fix needs to be deployed yesterday. Your team has a clean, tested PR ready to go. But wait! The process dictates that at least two comments must be raised. This "fail-safe" system, designed to prevent errors, is now actively slowing down the resolution of a real-world problem. You're forced to scramble for benign observations just to get the fix out the door.
The Perversion of Metrics: When the Process Becomes the Goal
The more frequently teams are forced to manufacture comments, the more this "compliance by creation" becomes the new normal. The original intent – catching real errors and improving code quality – gets diluted. The metric itself, "number of comments raised," overshadows the actual goal of quality assurance.
Developers, being rational actors, will optimize for the metric they're being measured against. If the metric is "raise two comments," then raising two comments becomes the objective, regardless of whether those comments genuinely contribute to better code. Real errors, subtle bugs, and critical design flaws start to go unnoticed because the focus has shifted from identifying problems to satisfying a numerical quota.
Joel Spolsky often talks about the importance of daily deployments and maintaining a constant release rhythm. His point is that if you can't push a simple text change quickly, your entire deployment pipeline is too heavy. This "minimum of two errors" rule perfectly exemplifies such a heavy process. It adds friction even when there's no genuine friction to be had, transforming a well-intentioned safeguard into a drag on productivity and, ironically, a potential source of overlooked issues.
Processes are vital. They bring order, ensure consistency, and can genuinely improve quality. But we must constantly scrutinize them, especially when they move from the drawing board to the daily grind. The moment a process starts requiring people to generate "busy work" or contort their actions to satisfy an arbitrary metric, it's time to re-evaluate.
The goal isn't to find two errors; it's to ship high-quality, reliable software. When our processes inadvertently encourage a performative adherence to metrics over genuine value, we've created a system that, despite its theoretical appeal, is destined to fail in the messy, unpredictable reality of the real world.