Discovery Theatre: When User Research Becomes Strategy’s Alibi
Something strange happens when discovery findings always align with roadmap plans. You're still talking to users, still synthesising insights, but you've stopped learning anything that matters.
There’s a Slack thread I keep thinking about. Engineering lead asks product: “Why are we building this?” PM responds: “We validated it in discovery.” Engineering: “Right, but why this problem?” PM, slightly defensive now: “We literally just spent three weeks talking to users.”
Nobody asks the obvious follow-up: validated it against what?
This is discovery theatre. It’s everywhere. And it’s quietly rotting the connection between strategy and execution.
The Pattern Nobody Names
Here’s what actually happens: leadership sets a direction (let’s say “improve retention”). Product translates that into a roadmap theme. Then comes discovery. A few weeks later, you’re in a review meeting, and someone’s presenting research that coincidentally supports exactly what was already on the roadmap.
Funny how that works.
I’ve watched this play out at four companies now. Different industries, different maturity levels, same dynamic. Discovery becomes the thing you do to prove you did the thing, not the thing that helps you figure out what thing to do.
The tell is in the artefacts. When discovery documentation reads like a closing argument instead of an investigation report, you’re watching theatre.
What’s Really Happening Beneath the Surface
The problem isn’t that teams are lying. Most aren’t. They genuinely believe they’re following best practice: talk to users, identify problems, build solutions. The process looks right.
But look at the incentives. Leadership wants certainty. Stakeholders want commitments. Engineering wants stable plans. Everyone’s optimising for something other than learning.
So discovery mutates. It stops being “what should we do?” and becomes “here’s why what we’re already doing makes sense.” The research questions narrow. The participant selection skews. The synthesis focuses on signal, ignores noise. Not maliciously, just gravitationally. The roadmap has mass. Discovery orbits it.
I’m not sure this is even conscious most of the time.
The real damage isn’t the wasted research effort. It’s the feedback loop you’ve just severed. When discovery can’t falsify your assumptions, it can’t guide your strategy. You’re flying blind while insisting you can see.
The Alibi Pattern
There’s a specific shape to this. Leadership announces a strategic priority. Product interprets it (usually too literally). They identify initiatives that feel appropriately strategic. Then they run discovery.
But here’s the thing: if discovery was genuinely informing strategy translation, you’d see these teams change course sometimes. Deprioritise the big initiative. Challenge the strategic framing. Come back with “actually, the real problem is adjacent to what we thought.”
When’s the last time you saw that happen?
Instead, discovery becomes proof of rigour. “We validated this with 12 users.” What you validated was whether users experience the problem you already decided to solve. That’s not strategy translation. That’s confirmation bias with a user interview guide.
The worst version is when teams run discovery after engineering has already started building. I’ve seen this. Multiple times. At companies you’ve heard of. The discovery work happens in parallel, surfaces inconvenient truths, gets ignored because “we’re too far down the path to change now.”
At that point you’re not even pretending. You’re just checking a box.
Strategy Translation vs. Strategic Justification
Real strategy translation starts with discomfort. You take a strategic direction (“own the enterprise market” or “become the workflow hub”) and you sit with how impossibly vague that is. Then you generate hypotheses. Actual testable hypotheses with falsifiable predictions.
If we believe owning enterprise means solving compliance anxiety, then we’d expect to hear procurement teams raising security questions before pricing. We’d see deals stalling at certain review stages. We’d find champions who can’t get budget approval.
Discovery tests those hypotheses. And crucially (crucially) it’s designed to break them.
When discovery is strategy’s alibi instead, the hypotheses are backwards-engineered. You start with the solution, infer the problem it solves, then go find users experiencing that problem. Research design becomes an exercise in confirming your roadmap makes sense.
The smoking gun is what doesn’t make it into the findings. All the contradictory signals. The users who said the problem wasn’t actually painful. The people who pointed at different issues entirely. That stuff gets coded as “edge cases” or “not our target segment.”
Except sometimes those edge cases are trying to tell you something.
The Company Size Trap
This gets worse as companies scale. Smaller teams can get away with informal discovery: someone talks to three customers, learns something, adjusts. The feedback loop is tight. Discovery and strategy blur together naturally.
But at 200 people? At 800? You need process. So you build it. Discovery becomes a phase. It gets templates, cadences, review meetings. You hire researchers. You create documentation standards.
And somewhere in that formalisation, you lose the ability to say “wait, this doesn’t make sense.”
I watched a team at a scale-up run an eight-week discovery initiative. Proper qualitative research, quantitative validation, the works. They came back with findings that directly contradicted the roadmap assumption. Leadership response? “Interesting. Let’s keep that in the backlog and revisit next quarter.”
The roadmap didn’t change. Because the roadmap wasn’t actually a hypothesis. It was a commitment they’d already made to the board.
That’s when discovery stops informing strategy and starts performing it.
The Testable Hypothesis Problem
Most teams aren’t actually working with testable hypotheses. They think they are. They’ll say things like “we believe users need better collaboration features.” That’s not a hypothesis. It’s a statement of intent wearing a hypothesis costume.
A testable hypothesis looks like: “If we reduce comment latency below 200ms, we’ll see threaded discussions increase by 30% and session depth grow by 15%, because the current delay breaks conversational flow and people give up.”
That can be falsified. You can discover you’re wrong about the latency threshold. Wrong about which conversations it affects. Wrong about whether flow matters.
When discovery is genuinely connected to strategy, it’s trying to falsify these things. You’re actively looking for disconfirming evidence. The goal is to get less wrong before you build.
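To make the distinction concrete, here's a minimal sketch of what it means to treat the latency hypothesis above as falsifiable: the predictions are written down as explicit numbers before the data comes in, so the data can contradict them. All names and figures are illustrative, taken from the essay's hypothetical, not from any real product.

```python
# Sketch: a hypothesis as explicit, falsifiable predictions.
# Metric names and lift targets are illustrative only.

from dataclasses import dataclass

@dataclass
class Prediction:
    metric: str
    expected_lift: float  # relative change predicted, e.g. 0.30 = +30%

# "If we cut comment latency below 200ms, threaded discussions
# rise 30% and session depth grows 15%."
hypothesis = [
    Prediction("threaded_discussions", 0.30),
    Prediction("session_depth", 0.15),
]

def evaluate(baseline: dict, observed: dict, predictions: list) -> dict:
    """For each prediction, report whether the observed lift met it."""
    verdicts = {}
    for p in predictions:
        lift = (observed[p.metric] - baseline[p.metric]) / baseline[p.metric]
        verdicts[p.metric] = lift >= p.expected_lift
    return verdicts

# Discovery data that falsifies half the hypothesis:
baseline = {"threaded_discussions": 100.0, "session_depth": 8.0}
observed = {"threaded_discussions": 135.0, "session_depth": 8.4}
print(evaluate(baseline, observed, hypothesis))
# → {'threaded_discussions': True, 'session_depth': False}
```

The point isn't the code; it's that a mixed verdict like this is possible at all. A vague problem statement can never return `False`.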
But discovery theatre doesn’t want falsifiable hypotheses. It wants confirmable narratives. So you get vague problem statements that any research can support: “Users are frustrated with the current workflow.” Well, yes. Users are always somewhat frustrated. What specifically? Which workflow? Frustrated how?
Doesn’t matter. It’s on the roadmap.
What This Reveals About How We Work
The real issue isn’t that product teams are bad at discovery. Most teams I’ve worked with are genuinely trying to learn from users. The issue is that organisational pressure creates a reality distortion field.
When you’re expected to deliver on quarterly commitments, when your performance review depends on hitting roadmap milestones, when leadership has already told the board what’s coming, discovery stops being genuine inquiry. It can’t be. The system won’t let it.
I keep coming back to this: discovery theatre exists because uncertainty is professionally dangerous. If you run discovery that genuinely questions the roadmap, you’re creating problems. You’re slowing things down. You’re introducing doubt. In most organisations, that’s not rewarded. It’s punished.
So teams adapt. They run discovery that looks rigorous but can’t threaten the plan. They ask questions with predetermined answers. They synthesise findings that align with stakeholder expectations.
And then they wonder why their products miss the mark.
The Uncomfortable Truth
Part of me thinks this is worse than not doing discovery at all. At least when you’re just building from gut instinct, everyone knows you’re guessing. There’s honesty in that.
But discovery theatre creates false confidence. You’ve talked to users. You’ve got data. The findings deck looks professional. Leadership feels informed. You’ve de-risked the roadmap.
Except you haven’t. You’ve just created better documentation for why you’re still guessing.
The question that keeps nagging at me: is this fixable within most organisational structures? Or does real discovery (the kind that genuinely informs strategy) require a level of organisational courage that most companies simply don’t have?
I don’t have a clean answer to that. Which might be the most honest thing I can say about it.