Reviewing Other People's Designs
Design reviews are one of the most asymmetric leverage points available to a staff engineer. A thirty-minute review of a junior or mid-level engineer's design can save weeks of rework — or permanently damage their confidence in their own judgment. The difference is almost entirely in how the reviewer engages.
Most engineers learn to do code reviews. Fewer learn to do design reviews. The mechanics are different, the stakes are higher, and the failure modes are more subtle. You can review a PR to improve the code. Reviewing a design is mostly about improving the engineer.
The Two Failure Modes
Design reviews fail in predictable ways.
Failure mode 1: The takeover. The reviewer rewrites the doc in their head and communicates a better version, rather than engaging with what the author actually proposed. The author walks away with a revised design but no understanding of why the revision is better. The next design they write has the same structural weaknesses.
Failure mode 2: The blessing. The reviewer skims, finds nothing catastrophically wrong, and approves with minor comments. The author feels validated. But the reviewer missed three decisions that are quietly load-bearing and will surface as production incidents in six months.
Both failure modes come from the reviewer being unclear about what the job is. The job is not to produce a better design. The job is to help the author produce better designs — this one and the ones after it.
Prepare Before You Review
A design review that happens in the meeting, cold, is a worse review than one that happens after fifteen minutes of async preparation. Before the review session:
- Read the doc end to end and identify the 2–3 highest-stakes decisions in it.
- List the assumptions you want to probe.
- Note what you do not know about the author's constraints.
The preparation step most reviewers skip: identifying what you do not know about the author's constraints. A reviewer who assumes they have full context and reviews accordingly will often flag "problems" that the author already resolved through constraints the reviewer is unaware of. This is demoralizing and a waste of time.
What to Actually Look For
Design reviews should evaluate five dimensions. Not all five apply equally to every design, but consistently omitting any of them produces blind spots.
1. Problem diagnosis. Does the design solve the right problem? This is the highest-stakes question and the one reviewers address least. If the diagnosis is wrong, everything downstream is wrong regardless of how elegantly it is constructed.
2. Decision coverage. Has the author identified and explicitly decided the things that actually need deciding? Many designs are missing decisions — they describe what will be built without saying why it was chosen over the alternatives, or they elide the decision about what not to build.
3. Failure modes. What happens when the design fails? Not just technical failures — operational failures, adoption failures, integration failures. A design that does not model its own failure modes is an incomplete design.
4. Constraint validity. Are the constraints the author is working within actually fixed? Sometimes constraints listed in a design are self-imposed or inherited from an earlier context that no longer applies. Probing this can unlock better designs.
5. Complexity budget. Is the design as simple as it can be while solving the actual problem? Not simpler — over-simplification is also a failure mode. But many designs introduce complexity that serves the author's interests (interesting problem, familiar tech) rather than the project's interests.
The Mechanics of Giving Feedback
The how matters as much as the what. Feedback delivered poorly generates defensiveness, not learning.
Ask before concluding. Instead of "This approach will have hot-spot contention under write-heavy loads," try "What's your model for write contention when multiple services are updating the same record? Have you load-tested the write path?" The question surfaces whether the author already knows this and has an answer, or whether it is genuinely a gap.
Label the severity. Not all feedback is equal. Engineers waste hours agonizing over a comment that was meant as a passing observation. Be explicit:
[critical] The retry logic here will cause a thundering herd under failure — this
needs to be resolved before we proceed.
[important] The cache invalidation strategy doesn't account for partial updates —
worth discussing in review.
[minor] The naming convention for these variables doesn't match the rest of the
codebase — easy fix, not blocking.
[nit] I'd have modeled this differently but it works — not worth changing.
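An aside on the [critical] example: it works as a comment because it names a concrete, well-understood failure pattern with a standard mitigation, exponential backoff with jitter. A minimal sketch of that mitigation, with retry_with_jitter and TransientError as illustrative names:

import random
import time

class TransientError(Exception):
    """Stands in for whatever errors the caller treats as retryable."""

def retry_with_jitter(op, max_attempts=5, base=0.1, cap=10.0):
    """Retry op with capped exponential backoff plus full jitter.

    Without the jitter, every client that saw the same failure retries
    on the same schedule, re-creating the load spike that caused the
    failure in the first place: the thundering herd.
    """
    for attempt in range(max_attempts):
        try:
            return op()
        except TransientError:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the error
            # Full jitter: sleep a random duration up to the capped backoff.
            time.sleep(random.uniform(0, min(cap, base * 2 ** attempt)))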
Separate your preference from a real problem. Many reviewers phrase their preferences as problems. "I would have used event sourcing here" is a preference. "This design has no audit trail, which is required by our compliance policy" is a problem. Know the difference and communicate it.

Point to the principle, not just the conclusion. Instead of "Don't use a shared database across services," try "The reason I'd avoid a shared database here is that it couples the deployment cycle of both services and creates a shared operational failure domain — worth thinking through whether that tradeoff makes sense given your reliability requirements." The principle is transferable. The conclusion is not.
Reviewing in the Session
The review session itself should surface the highest-stakes questions, not relitigate every comment in the doc. If you have prepared async comments, the session is for resolution — not recitation.
[0:00–0:05] Author: brief recap of the problem and the recommendation
[0:05–0:15] Reviewer: 2–3 highest-stakes questions — genuinely open
[0:15–0:30] Discussion: let the author lead; add signal when they are missing something
[0:30–0:40] Explicit summary: what is approved, what needs revision, what is open
[0:40–0:45] Author states next steps — not the reviewer

The last step matters. The author stating the next steps means they own the path forward. The reviewer stating them means the reviewer owns the design.
When to Approve Despite Disagreement
Approval does not mean "I would have made all the same decisions." It means "this design is sound enough to proceed, the risks are understood, and the team is capable of executing it."
If you genuinely disagree with a decision the author made but:
- the decision is within an acceptable range,
- the author has thought it through,
- the stakes of being wrong are recoverable,
...then approve with your concern recorded. Don't block the design. Block only when you have identified a genuine risk that the author has not addressed or cannot address without more information.
Key Takeaways
- The goal of a design review is to improve the engineer, not just the design — feedback that teaches the underlying principle is more valuable than feedback that corrects this instance.
- The two failure modes are the takeover (reviewer dominates the design) and the blessing (reviewer approves without engaging) — both are failures of clarity about the reviewer's job.
- Prepare before the review session: identify the 2–3 highest-stakes decisions, the assumptions to probe, and what you don't know about the author's constraints.
- Label feedback by severity (critical, important, minor, nit) so authors can prioritize and do not agonize over passing observations.
- Ask questions before concluding — probe whether the author has already resolved a concern before treating it as a gap.
- Approve despite disagreement when the decision is within an acceptable range, the author has thought it through, and the stakes of being wrong are recoverable.