Compound interest may be the most cited concept in finance. Everyone understands it in the context of money: small, consistent returns accumulate into outsized results over time.
Almost nobody applies the same logic to decision quality.
A weekly review that improves your process by 1% doesn't feel like much. After one week, it's invisible. After one month, it's barely noticeable. After a year, the compounding (1.01^52 ≈ 1.68) has you operating with a decision process roughly 68% better than where you started. After two years, it has nearly tripled.
That's compounding. And it works on process the same way it works on capital — but only if you're consistent, and only if you're reviewing the right thing.
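To make the arithmetic concrete, here is a minimal sketch of that compounding claim in Python. The 1% weekly rate and the time horizons are the figures above; the function itself is just illustration.

```python
# Compounding a 1% weekly improvement in process quality.
def compounded_quality(weekly_gain: float, weeks: int) -> float:
    """Relative process quality after compounding weekly_gain for weeks."""
    return (1 + weekly_gain) ** weeks

for weeks in (1, 4, 52, 104):
    print(f"{weeks:>3} weeks: {compounded_quality(0.01, weeks):.2f}x baseline")

# Output:
#   1 weeks: 1.01x baseline
#   4 weeks: 1.04x baseline
#  52 weeks: 1.68x baseline
# 104 weeks: 2.81x baseline
```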
What most reviews get wrong
The standard review looks at results. What happened? Did we hit the target? What was the P&L? Who performed well?
This feels productive but teaches the wrong lessons. Results are the output of a process filtered through variance. Reviewing results tells you about the output — it tells you almost nothing about the process that produced it.
A good month doesn't mean the process was good. A bad month doesn't mean the process was bad. Evaluating the output without evaluating the machine that produced it is how organizations systematically reinforce luck and punish sound process.
The compounding review doesn't evaluate results. It evaluates the system that produces them.
The three time horizons
Effective review operates on three cycles, each serving a different purpose:
Weekly: Pattern detection. The weekly review is close to the action. Its job is to catch emerging patterns before they become habits. What protocols were followed? Which were skipped? Were the skips justified or emotional? Did any situation arise that the system wasn't designed for?
The weekly review isn't deep. It's frequent. Its value comes from cadence, not depth. A 30-minute weekly review that happens every week is worth more than a 4-hour monthly review that happens sporadically.
Monthly: Process evaluation. The monthly review has enough data to evaluate patterns rather than individual events. Are the same protocols being skipped repeatedly? Is the system producing consistent outputs, or is there variance that suggests a gap? How does this month compare to the system's baseline?
This is where separating process from outcome matters most. Evaluate the process first, blind to results. Then reveal the results and compare. When process quality and result quality diverge (good process, bad results, or bad process, good results), that divergence is the highest-value data point.
Quarterly: System evolution. The quarterly review is strategic. It asks: should the system itself change? Not "did I follow the rules?" but "are the rules still right?" This is where version upgrades happen — adding new protocols, removing ones that no longer serve, adjusting parameters based on accumulated evidence.
The quarterly review is also where you zoom out and measure the rate of learning itself. Is the system improving faster or slower than previous quarters? Are the problems getting more sophisticated (a sign of progress) or staying the same (a sign of stagnation)?
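One way to keep the three horizons distinct in practice is to encode each cycle as its own checklist, so a review never drifts into the wrong altitude. A minimal sketch in Python; the structure and the question wording are illustrative, drawn from the descriptions above rather than from any prescribed template.

```python
from dataclasses import dataclass

@dataclass
class ReviewCycle:
    name: str
    cadence_days: int
    purpose: str
    questions: list[str]

# The three horizons described above, encoded as checklists.
CYCLES = [
    ReviewCycle("weekly", 7, "pattern detection", [
        "Which protocols were followed, and which were skipped?",
        "Were the skips justified or emotional?",
        "Did any situation fall outside the system's design?",
    ]),
    ReviewCycle("monthly", 30, "process evaluation", [
        "Are the same protocols being skipped repeatedly?",
        "How does this month's variance compare to the baseline?",
    ]),
    ReviewCycle("quarterly", 90, "system evolution", [
        "Are the rules still right, or should the system change?",
        "Is the rate of learning faster or slower than last quarter?",
    ]),
]
```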
The outcome-blind protocol
Every review — weekly, monthly, quarterly — should include an outcome-blind phase. The mechanics:
Phase A: Process evaluation. Review the decisions made during the period. Evaluate the quality of the process: were protocols followed? Were the right inputs gathered? Were the decisions well-structured given the available information? Score each decision on process quality.
This evaluation happens without knowing the results. The reviewer (even if it's just you reviewing yourself) doesn't see the outcomes until Phase A is complete.
Phase B: Outcome integration. Reveal the results. Compare them to the process scores. The four quadrants (a code sketch follows the list):
- Good process, good outcome — reinforce. The system worked.
- Good process, bad outcome — reinforce the process, investigate the variance. This is noise, not signal. Don't change the system because of it.
- Bad process, good outcome — correct the process. This is the most dangerous quadrant because it rewards the wrong behavior. The luck will regress; the bad process won't correct itself.
- Bad process, bad outcome — correct the process. The system is working as designed (bad inputs → bad outputs).
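The two-phase mechanics and the quadrant table translate almost directly into code. A minimal sketch, assuming process quality has already been scored blind in Phase A and reduced to a number; the names and the 0.7 "good process" threshold are illustrative.

```python
# Phase B of an outcome-blind review: join process scores (produced
# without seeing outcomes) to outcomes, and map each decision to a quadrant.

def review_quadrant(good_process: bool, good_outcome: bool) -> str:
    """Map a (process, outcome) pair to the recommended response."""
    if good_process and good_outcome:
        return "reinforce: the system worked"
    if good_process:
        return "reinforce process, investigate variance: noise, not signal"
    if good_outcome:
        return "correct process: the luck will regress, the process won't"
    return "correct process: bad inputs produced bad outputs, as designed"

def outcome_blind_review(process_scores: dict[str, float],
                         outcomes: dict[str, bool],
                         good_threshold: float = 0.7) -> dict[str, str]:
    """process_scores must come from Phase A, scored before outcomes
    were revealed; this function is Phase B, where the two finally meet."""
    return {
        decision: review_quadrant(score >= good_threshold, outcomes[decision])
        for decision, score in process_scores.items()
    }

# A good process with a bad outcome is reinforced, not changed.
print(outcome_blind_review({"decision-14": 0.9}, {"decision-14": False}))
# {'decision-14': 'reinforce process, investigate variance: noise, not signal'}
```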
The outcome-blind phase prevents the most common review error: changing a good process because of a bad outcome. This error compounds negatively — each unwarranted change degrades the system, which produces worse outcomes, which triggers more changes.
What to track
The compounding review needs metrics — not result metrics, but process metrics. Things that measure the quality of the decision-making system itself:
Protocol compliance rate. What percentage of decisions followed the full protocol? This is the most basic process metric. If compliance is below 80%, the process is too complex, the enforcement is too weak, or the friction is too high.
Gate failure rate. How often did gatekeeping checks flag a problem? A consistently low rate might mean the gates are working (few problems to catch) or that the gates are too weak (problems aren't being detected). Track this over time to calibrate.
Decision documentation rate. What percentage of significant decisions were documented at decision time (not retrospectively)? Documentation that happens after the outcome is contaminated. Only real-time documentation is clean.
Omission detection rate. How many omissions were identified? This one trends upward as the review process matures — you get better at seeing what you didn't do. An increasing omission detection rate is a sign of a healthy review practice, not a sign of increasing failure.
System version cadence. How frequently is the system being updated? Too frequently suggests reactive tinkering. Too infrequently suggests stagnation. The sweet spot is changes driven by quarterly evidence, not by individual events.
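If each decision is logged as a small record at decision time, the first three metrics reduce to simple ratios over the period. A minimal sketch; the record fields are assumptions, and the 80% threshold is the one stated above. Omission counts and version cadence are tracked across periods rather than per record, so they're left out here.

```python
from dataclasses import dataclass

@dataclass
class DecisionRecord:
    followed_protocol: bool   # was the full protocol followed?
    gate_flagged: bool        # did a gatekeeping check fire?
    documented_live: bool     # documented at decision time, not after?

def process_metrics(records: list[DecisionRecord]) -> dict[str, float]:
    """Compute the period's process metrics as ratios in [0, 1]."""
    n = len(records)
    return {
        "protocol_compliance": sum(r.followed_protocol for r in records) / n,
        "gate_failure": sum(r.gate_flagged for r in records) / n,
        "documentation": sum(r.documented_live for r in records) / n,
    }

def compliance_warning(metrics: dict[str, float]) -> bool:
    """Below 80% compliance: the process is too complex, enforcement
    is too weak, or friction is too high."""
    return metrics["protocol_compliance"] < 0.80
```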
The inchworm effect
There's a phenomenon in process development that feels discouraging but is actually a sign of progress: the problems get harder.
Early in the review cycle, the issues are obvious. Skipped protocols. Missing documentation. Emotional decisions that were clearly wrong in hindsight. These get corrected quickly.
As the obvious problems are fixed, subtler ones emerge. Biases that were masked by the larger errors. Edge cases that the system doesn't cover. Situations where the protocols are followed but the interpretation is flawed.
This feels like failure — "the problems never end." It's actually progress. The problems are climbing the sophistication ladder. You're not repeating the same mistakes. You're encountering new ones at a higher level.
Imagine an inchworm climbing a wall. At any given moment, the front end is stretching to a new height while the back end is still catching up. Frame by frame, it can look like the inchworm isn't moving. Zoom out, and the trajectory is clearly upward.
The compounding review makes this trajectory visible. Without it, you can't tell the difference between stagnation and sophisticated progress — and you're likely to quit just when the compounding is starting to pay off.
The consistency principle
None of this works without consistency. The compounding review is not a tool you use when things go wrong. It's a practice you maintain regardless of results — in good periods and bad, when you feel like it and when you don't.
The temptation is always to skip the review when things are going well ("why fix what isn't broken?") and to overhaul everything when things are going badly ("something needs to change"). Both impulses are wrong. The review cadence should be immune to recent results — because its job is to see through the noise of results to the signal of process.
The review isn't a response to problems. It's a practice that prevents them from compounding. Skip it when things are good, and you miss the slow degradation. Skip it when things are bad, and you miss the recovery insight.
Consistency is the compounding mechanism. Each review builds on the last. Miss one, and you lose the continuity. Miss three, and you've reset.