The Senior Developer's Guide to Code Reviews
Early in my career, I thought code reviews were about catching bugs. I'd leave dozens of comments about variable naming, missing semicolons, and style inconsistencies. My reviews were thorough. They were also demoralizing, unproductive, and often wrong about what actually mattered.
As a senior developer, I've learned that code reviews are about something much bigger: they're about building shared understanding, maintaining system integrity, and growing the entire team's capabilities. The bugs you catch in review are a bonus. The real value is the conversation.
This guide shares what I've learned about reviewing code effectively — not just catching problems, but doing it in a way that makes everyone better.
Part 1: What Code Reviews Are Actually For
Google's engineering practices documentation — one of the most cited resources on code review — states that the primary purpose of code review is to ensure the overall code health of the codebase improves over time. Not perfection. Improvement.
In practice, code reviews serve five purposes:
- Knowledge sharing. When you review someone's code, you learn about a part of the system you might not work on. When they read your review, they learn your mental model. Over time, the entire team understands the entire system better.
- Design validation. Does this approach fit the system's architecture? Are there simpler alternatives? Is this the right abstraction boundary? These high-level questions are more valuable than any line-level comment.
- Bug prevention. Yes, reviews catch bugs — but primarily logic bugs and edge cases, not syntax errors (that's what linters and type systems are for).
- Maintainability. Will the next developer who reads this code understand it? In 6 months, will you understand it?
- Team culture. How you review code defines your team's culture. Harsh reviews create fear. Thoughtful reviews create trust and growth.
Part 2: The Review Pyramid
I think about code review comments in layers, from most to least important:
Layer 1: Correctness (Critical)
Does the code do what it's supposed to do? Are there logic errors, race conditions, or security vulnerabilities?
// Example: Spot the bug
async function transferFunds(fromAccount, toAccount, amount) {
  const from = await getAccount(fromAccount);
  const to = await getAccount(toAccount);
  if (from.balance >= amount) {
    from.balance -= amount;
    to.balance += amount;
    await updateAccount(from); // What if this succeeds...
    await updateAccount(to); // ...but this fails?
  }
}
// Review comment:
// "This has a consistency issue. If updateAccount(to) fails after
// updateAccount(from) succeeds, the money is deducted but never
// credited. This needs to be in a database transaction.
//
// Suggested approach:
// await db.transaction(async (tx) => {
// await tx.updateAccount(from);
// await tx.updateAccount(to);
// });"
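To make the atomicity point concrete, here is a toy, in-memory sketch of what "all or nothing" means. The `accounts` store and the `transaction` helper are invented for illustration; a real fix would use your database driver's transaction API, not this:

```javascript
// Toy in-memory illustration of the atomicity problem above.
// `accounts` and `transaction` are made-up stand-ins, not a real DB API.
const accounts = new Map([
  ["alice", { id: "alice", balance: 100 }],
  ["bob", { id: "bob", balance: 50 }],
]);

// Runs `fn` against a snapshot; commits only if every step succeeds,
// otherwise the original state is left untouched (rollback).
function transaction(fn) {
  const snapshot = new Map([...accounts].map(([k, v]) => [k, { ...v }]));
  fn(snapshot); // may throw -- then we never reach the commit below
  for (const [k, v] of snapshot) accounts.set(k, v);
}

function transferFunds(fromId, toId, amount) {
  transaction((tx) => {
    const from = tx.get(fromId);
    const to = tx.get(toId);
    if (from.balance < amount) throw new Error("insufficient funds");
    from.balance -= amount;
    to.balance += amount;
  });
}
```

If any step throws mid-transfer, neither balance changes. That is the property the review comment is asking the real `db.transaction` to guarantee.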
Layer 2: Design (Important)
Is the approach sound? Does it fit the system's architecture? Are there simpler alternatives? Design issues caught in review are commonly estimated to be 10-100x cheaper to fix than the same issues found in production, a point reinforced by Martin Fowler's analysis of software quality.
// Example: Design concern
class UserService {
  async getUserWithOrders(userId) {
    const user = await this.userRepo.findById(userId);
    const orders = await this.orderRepo.findByUserId(userId);
    const payments = await this.paymentRepo.findByUserId(userId);
    const reviews = await this.reviewRepo.findByUserId(userId);
    const notifications = await this.notificationRepo.findByUserId(userId);
    return { user, orders, payments, reviews, notifications };
  }
}
// Review comment:
// "This method is doing too much. It's loading 5 different entities
// for every call. Do all callers need all of this data?
//
// Consider:
// 1. Separate methods for each concern (getUserProfile, getUserOrders, etc.)
// 2. If callers need different subsets, use a field-selection pattern
// 3. If this is for a dashboard, consider a dedicated read model/view"
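One way to read suggestion 2 is an `include`-style options object, so callers pay only for the relations they ask for. The repository stubs and shapes below are invented for the sketch:

```javascript
// Field-selection sketch: load relations only when the caller asks.
// These repos are hypothetical stubs, not the article's real services.
const userRepo = { findById: async (id) => ({ id, name: "user-" + id }) };
const orderRepo = { findByUserId: async (id) => [{ orderId: 1, userId: id }] };
const reviewRepo = { findByUserId: async (_id) => [] };

async function getUser(userId, include = {}) {
  // Always load the user itself; everything else is opt-in.
  const result = { user: await userRepo.findById(userId) };
  if (include.orders) result.orders = await orderRepo.findByUserId(userId);
  if (include.reviews) result.reviews = await reviewRepo.findByUserId(userId);
  return result;
}
```

A dashboard can call `getUser(42, { orders: true, reviews: true })` while a profile page calls `getUser(42)` and hits one repository instead of five.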
Layer 3: Maintainability (Important)
Will someone else understand this code in 6 months? Is it well-structured? Are the names clear?
// Example: Maintainability concern
function process(d, f) {
  const r = [];
  for (let i = 0; i < d.length; i++) {
    if (f(d[i])) {
      r.push(transform(d[i]));
    }
  }
  return r;
}
// Review comment:
// "Could we use more descriptive names? 'd', 'f', and 'r' don't
// convey intent. Also, this is essentially Array.filter().map().
//
// Suggestion:
// function filterAndTransform(items, predicate) {
// return items.filter(predicate).map(transform);
// }"
Layer 4: Performance (Situational)
Is there a performance concern? Only raise this when it matters — premature optimization is the root of all evil, as Donald Knuth famously observed.
Layer 5: Style (Low Priority)
Naming conventions, formatting, import order. This should be automated. If you're leaving style comments in code reviews, set up ESLint, Prettier, and pre-commit hooks instead.
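As one illustration of that automation, here is a minimal `package.json` fragment using husky and lint-staged. Both tools are an assumption on my part (any pre-commit runner works), and the exact keys vary by tool version:

```json
{
  "scripts": {
    "prepare": "husky"
  },
  "lint-staged": {
    "*.{js,jsx,ts,tsx}": ["eslint --fix", "prettier --write"]
  }
}
```

With a `.husky/pre-commit` hook that runs `npx lint-staged`, staged files are linted and formatted before every commit, so style never reaches the reviewer.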
Part 3: How to Write Good Review Comments
The Comment Formula
Every review comment should follow this pattern:
1. [Category] What you observed (fact, not opinion)
2. Why it matters (impact)
3. What you suggest (constructive alternative)
// Bad review comment:
"This is wrong."
// Good review comment:
"[Bug] This query doesn't handle the case where `userId` is null,
which can happen for anonymous users. This would throw a database
error in production.
Suggestion: Add a null check before the query, or use a default
anonymous user ID."
// Also good (for non-blocking suggestions):
"nit: Consider renaming `data` to `jobListings` for clarity.
Not blocking approval."
Prefix System
I use a prefix system that communicates severity:
| Prefix | Meaning | Blocks Approval? |
|---|---|---|
| [Bug] | This will cause incorrect behavior | Yes |
| [Security] | Security vulnerability | Yes |
| [Design] | Architectural concern | Usually |
| [Perf] | Performance issue | Depends on severity |
| [Question] | I don't understand this — explain? | No |
| [Suggestion] | Non-blocking improvement idea | No |
| nit: | Nitpick — trivial, take it or leave it | No |
| [Praise] | This is well done | N/A |
The Tone Matters
Research on code review communication published by the ACM shows that the tone of review comments significantly affects whether they're addressed and how the author feels about the process.
// Adversarial (bad):
"Why did you do it this way? This doesn't make any sense."
// Collaborative (good):
"I'm trying to understand the reasoning here. Was there a specific
requirement that led to this approach? I'm wondering if [alternative]
might work better because [reason]."
// Commanding (bad):
"Change this to use a Map instead of an object."
// Suggestive (good):
"A Map might be a better fit here since the keys aren't known at
compile time. It would also give us O(1) lookups and the .size
property. What do you think?"
// Passive-aggressive (bad):
"I guess this works..."
// Direct and kind (good):
"This works! One thought: if we add [X], it would also handle
the edge case where [Y]. Worth considering?"
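The Map suggestion in the "suggestive" example above is easy to demonstrate. This small sketch (the `hitCounts` counter is invented for illustration) shows the two properties the comment mentions, runtime keys and `.size`:

```javascript
// Map handles keys that aren't known at compile time, exposes .size,
// and keys keep their type (no coercion to strings, unlike objects).
const hitCounts = new Map();

function recordHit(key) {
  hitCounts.set(key, (hitCounts.get(key) ?? 0) + 1);
}

recordHit("home");
recordHit("home");
recordHit(404); // a number key stays a number; an object would store "404"
```

With a plain object you would need `Object.keys(obj).length` instead of `.size`, and the numeric key `404` would silently become the string `"404"`.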
Part 4: What to Look For
The Senior Developer's Checklist
When I review a PR, I check these things roughly in this order:
- PR description. Does the PR explain what it does and why? Can I understand the change from the description alone? If not, I ask the author to improve it before reviewing code.
- Architecture fit. Does this change fit the system's design? Does it introduce new patterns or dependencies? Is there an ADR or design doc for large changes?
- Edge cases. What happens with empty inputs, null values, maximum sizes, concurrent requests? I mentally trace through the unhappy paths.
- Error handling. Are errors caught and handled appropriately? Are error messages useful for debugging? Are there any silent failures?
- Security. Input validation, authentication checks, authorization, SQL injection, XSS. The OWASP Top 10 is a good mental checklist.
- Testing. Are there tests? Do they test the right things? Are they testing behavior or implementation details?
- Readability. Can I understand this code without the PR description? Are the names clear? Is the complexity justified?
Things I Don't Look For
- Formatting. Automated by Prettier.
- Linting violations. Automated by ESLint.
- Type errors. Caught by TypeScript compiler.
- Import order. Automated by eslint-plugin-import.
- Test coverage numbers. CI reports this automatically.
Automate everything you can so you can focus on what only humans can catch.
Part 5: Review Workflow and Speed
Response Time
Google's engineering practices recommend reviewing PRs within one business day. I aim for 4 hours for normal PRs and 1 hour for urgent fixes. Slow reviews block the author, create context-switching costs, and increase merge conflict risk.
Stacked PRs
Large changes should be broken into smaller, reviewable PRs. My rule: if a PR changes more than 400 lines, it should probably be split. SmartBear's research (cited in their code review best practices) found that defect-detection effectiveness drops sharply once a review exceeds roughly 200-400 lines of code.
When to Approve
A PR doesn't need to be perfect to be approved. It needs to improve the codebase. If the code is correct, well-designed, and maintainable, approve it — even if you'd have written it differently. Your preferences are not requirements.
// My approval criteria:
// ✅ No bugs or security issues
// ✅ Design fits the architecture
// ✅ Tests cover the important paths
// ✅ Code is readable and maintainable
// ✅ PR description is clear
// Things that do NOT block approval:
// ❌ I would have named this variable differently
// ❌ I would have used a different library
// ❌ The approach is different from what I would have chosen (but still correct)
Part 6: Reviewing as a Growth Tool
Mentoring Through Reviews
Code review is one of the most effective mentoring tools available. For junior developers, every review is a learning opportunity. But the way you deliver feedback determines whether they learn or just comply.
// Teaching moment (good):
"[Suggestion] This is using Promise.all() which is great for parallel
execution. One thing to consider: if one promise rejects, all others
are discarded. Since these API calls are independent, you might want
Promise.allSettled() so you can handle partial failures gracefully.
Here's the MDN documentation that explains the difference well:
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise/allSettled"
// Missed teaching moment (just a command):
"Use Promise.allSettled here instead."
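The difference that teaching comment describes can be shown in a few lines. This sketch (the `fetchAllSettled` helper is my own name, not a standard API) separates successes from failures instead of discarding everything on the first rejection, which is what `Promise.all` would do:

```javascript
// Promise.all rejects as soon as one input rejects; Promise.allSettled
// reports every outcome, so partial failures can be handled gracefully.
async function fetchAllSettled(promises) {
  const results = await Promise.allSettled(promises);
  return {
    values: results
      .filter((r) => r.status === "fulfilled")
      .map((r) => r.value),
    errors: results
      .filter((r) => r.status === "rejected")
      .map((r) => r.reason.message),
  };
}
```

For three independent API calls where one fails, you still get the two successful values plus the error, rather than a single rejection.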
The Praise Ratio
For every constructive criticism, try to find something genuinely praiseworthy. This isn't about being "nice" — it's about reinforcing good patterns so they're repeated.
// Example praise comments:
"[Praise] Great error handling here — the fallback to cached data
is exactly the right pattern for this use case."
"[Praise] I like how you extracted this into a separate utility.
It's going to be useful in the notifications module too."
"[Praise] This test covers an edge case I wouldn't have thought of.
Nice catch."
Part 7: My Opinionated Take
After years of reviewing and being reviewed, here's what I believe:
1. Style comments are a waste of everyone's time. If you're commenting on formatting, indentation, or import order, you're wasting cognitive energy that should go toward design and correctness. Automate style enforcement with tools.
2. Approval doesn't mean endorsement. When I approve a PR, I'm saying "this is good enough to merge and improves the codebase." I'm not saying "this is how I would have written it." Blocking a PR because you'd have chosen a different approach (when both work) is gatekeeping, not reviewing.
3. The best review comment is a question. "Why did you choose this approach?" is more productive than "You should have done it differently." Questions invite discussion. Statements invite defensiveness.
4. Fast reviews are more valuable than thorough reviews. A good-enough review in 2 hours is better than a perfect review in 2 days. The cost of blocking a teammate for 2 days almost always exceeds the cost of a minor issue slipping through.
5. Review the PR, not the person. "This code has a race condition" is different from "You wrote a race condition." The distinction matters enormously for team dynamics.
Action Plan
This Week
- Set up automated formatting and linting so you never comment on style
- Adopt the prefix system ([Bug], [Suggestion], nit:) in your reviews
- Add at least one praise comment to every review
- Aim to respond to reviews within 4 hours
This Month
- Create a team code review guide documenting expectations
- Set up a PR template that requires description, testing notes, and screenshots
- Discuss review culture in a team retro — ask how reviews feel
- Track average review turnaround time and set a team goal
Ongoing
- Use reviews as mentoring opportunities for junior developers
- Regularly revisit and update your review checklist
- Ask for feedback on your own reviews — "Was my review helpful?"
Sources
- Google Engineering Practices: Code Review
- Martin Fowler: Is High Quality Software Worth the Cost?
- SmartBear: Best Practices for Code Review
- ACM: Communicating in Code Review
- OWASP Top 10
I'm Ismat, and I build BirJob — Azerbaijan's job aggregator scraping 80+ sources daily.
