Pre-Mortems vs. Post-Mortems
Most product teams treat failure analysis as a single discipline. But the pre-mortem and post-mortem serve fundamentally different purposes, and conflating them weakens both.

I sat through a post-mortem last year where a team spent ninety minutes documenting why a feature launch had gone sideways. Root causes identified. Action items assigned. Lessons learned captured in a Notion doc that would never be opened again.
Three months later, same team, different feature, nearly identical failure mode.
When I asked whether the previous post-mortem had surfaced this risk, someone pulled up the document. There it was, on page two: “Risk of insufficient QA coverage in edge cases.” Action item: “Improve test coverage.” Status: “In progress.”
The team had done the post-mortem. They’d identified the problem. They just hadn’t actually changed anything.
The Pre-Mortem Promise
Gary Klein introduced the pre-mortem concept in the late 1990s, and it’s elegantly simple. Before you launch, before you commit, gather the team and ask: “Imagine we’re six months in the future and this project has failed. What went wrong?”
The psychology is clever. Prospective hindsight, as researchers call it, increases people's ability to identify reasons for outcomes by about 30%, a finding from Mitchell, Russo, and Pennington's 1989 study. Asking "what could go wrong" invites defensive optimism. Asking "what did go wrong" unlocks different cognitive pathways.
The strategic argument for pre-mortems is compelling. You surface risks while there’s still time to mitigate them. You give permission for skepticism in a culture that often rewards false confidence. You create a structured moment for the person who’s been quietly worried to actually voice that worry.
The pre-mortem’s value isn’t prediction accuracy. It’s permission to speak before speaking becomes criticism.
I’ve run pre-mortems that genuinely changed project trajectories. A team about to launch a pricing change realised they hadn’t modelled how existing customers on annual contracts would react. An infrastructure migration surfaced a dependency nobody had mapped. These weren’t exotic risks. They were obvious in retrospect, but nobody had created space to articulate them.
The Post-Mortem Reality
But here’s what pre-mortem advocates sometimes gloss over: you cannot actually know why something failed until it fails.
Pre-mortems surface the risks people can imagine. Post-mortems reveal the risks that actually materialised, which are frequently not the same thing.
I’ve watched teams run thorough pre-mortems that identified fifteen potential failure modes, then fail for a sixteenth reason nobody anticipated. The pre-mortem created false confidence. “We thought about this carefully. We identified the risks.” But the thing that killed them wasn’t on the list.
Knight Capital’s famous 2012 trading disaster is instructive. They lost $440 million in 45 minutes due to a deployment error that reactivated old code. No pre-mortem would have surfaced “someone will deploy to seven servers instead of eight, leaving legacy code active on one machine.” The failure mode was too specific, too contingent on a particular sequence of human decisions.
The post-mortem, by contrast, could trace exactly what happened. And that tracing revealed systemic issues (deployment processes that allowed partial rollouts, insufficient monitoring, inadequate rollback procedures) that a pre-mortem might have gestured toward but couldn't have specified.
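The guardrail that kind of dissection points toward can be made concrete. Here's a minimal, hypothetical sketch of a pre-activation check that refuses to proceed unless every server reports the same deployed version; the names and structure are my own illustration, not Knight Capital's actual systems:

```python
def verify_uniform_deployment(host_versions):
    """Return the single deployed version, or raise on a partial rollout.

    host_versions maps hostname -> version string reported by that host.
    A hypothetical pre-activation gate: new code is only switched on
    once every host agrees on what it is running.
    """
    if not host_versions:
        raise RuntimeError("No hosts reported a version")
    versions = set(host_versions.values())
    if len(versions) > 1:
        # e.g. seven hosts on the new build, one still on legacy code
        raise RuntimeError(f"Partial rollout: hosts report {sorted(versions)}")
    return versions.pop()
```

A pre-mortem might have said "deployments are risky." Only dissecting the actual failure tells you which specific invariant, uniform versions across all eight servers, was the one worth enforcing.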
Pre-mortems imagine failure. Post-mortems dissect it. These are not the same skill, and they don’t produce the same insights.
The Translation Problem
Here’s where I think both camps get stuck.
The pre-mortem is a strategy tool. It operates in the realm of possibility, of adjustable plans, of risks that can still be mitigated. It’s forward-looking and generative.
The post-mortem is an execution archaeology tool. It operates in the realm of what actually happened, of specific decisions and their consequences, of reality rather than projection.
The strategy-execution gap shows up when organisations treat these as interchangeable or, worse, as competing approaches. “We do pre-mortems now, so we don’t need post-mortems” misunderstands what each one offers.
The translation layer failure is subtler. Pre-mortems identify risks, but translating “identified risk” into “changed behaviour” requires something more than documentation. It requires someone to actually do something different. And post-mortems identify causes, but translating “identified cause” into “systemic change” faces the same gap.
Both rituals can become theatre. The pre-mortem where everyone lists risks that get captured and ignored. The post-mortem where everyone agrees on root causes that never get addressed. The document exists. The learning doesn’t transfer.
What Actually Distinguishes Useful Practice
I’ve been trying to figure out what separates teams that genuinely learn from failure from teams that just perform the rituals.
The pattern I keep noticing: it’s not about which method they use. It’s about whether anyone has authority to act on what emerges.
A pre-mortem that surfaces a risk nobody can address is just anxiety documentation. “We might not have enough QA coverage” means nothing if there’s no capacity to add QA coverage. The exercise becomes a way to say “we told you so” later rather than a way to change outcomes.
Similarly, a post-mortem that identifies root causes nobody will fix is just collective grief processing. Useful for morale, maybe, but not for learning.
The teams I’ve seen get value from these practices are the ones where the facilitator can ask: “Okay, we’ve identified this risk. What are we actually going to do about it? Who decides? What’s the timeline?” And then someone in the room has the authority to answer.
Amazon’s post-mortem process, its Correction of Error reviews, reportedly works partly because it escalates to people who can actually authorise systemic changes. It’s not a team exercise in reflection. It’s an input to decision-making at a level where decisions get implemented.
The Uncomfortable Synthesis
I’ve gone back and forth on which matters more.
Pre-mortems feel proactive. They position learning as forward-looking, optimistic even. We’re smart enough to anticipate problems. We’re mature enough to voice concerns. We’re capable of adjusting before it’s too late.
Post-mortems feel reactive. Something broke. We’re doing the responsible thing by understanding why. But there’s always a slight flavour of closing the barn door.
Actually, that framing is probably wrong. It’s not about which is more valuable in the abstract. It’s about which constraint your organisation actually faces.
Some teams fail because they don’t create space for pre-launch skepticism. The roadmap is the roadmap. Concerns are career-limiting. The pre-mortem addresses a real gap for these teams.
Other teams fail because they don’t do rigorous causal analysis after things go wrong. They have opinions about what happened, but nobody traces the actual sequence of decisions. The post-mortem addresses a different real gap.
And some teams, honestly, do both rituals and learn from neither because the findings don’t connect to anyone with authority to act.
The Question Worth Sitting With
Maybe the distinction between pre-mortems and post-mortems matters less than a different question: does your organisation actually change based on what it learns?
I keep meeting teams that have elaborate learning rituals and no learning. The documents exist. The processes run. Nothing changes.
And occasionally I meet teams with barely any formal process that genuinely adapt based on experience. They don’t write post-mortems. They just... fix things when they break, and adjust plans when risks surface.
The ritual isn’t the learning. The ritual is just a container that might enable learning if the organisational conditions are right.
Whether you imagine failure beforehand or dissect it afterward, the question is the same: will anyone do anything differently as a result?
I’m not sure the answer depends on which ritual you choose.

