Discovery Debt: The Invoice Nobody Wanted to Open
Your roadmap is full of features built on guesses nobody validated. The bill for all that skipped research is coming due, and nobody’s sure whose budget it comes out of.
There’s a moment in every product audit where you find the feature. The one that took a quarter to build, that nobody uses, that solves a problem the team assumed existed but never actually verified.
Every company has at least one. Most have dozens.
I was reviewing a product’s analytics last month and found a reporting module that had taken four engineers three months to ship. Usage data showed eleven people had opened it in the past year. Eleven. Two of them were the PM and the designer who built it.
When I asked about the discovery work behind it, I got a familiar answer. “We had customer requests.” Which, when I pushed, turned out to be two enterprise clients who mentioned it in passing during renewal calls. Nobody had validated whether reporting was actually the problem. Nobody had checked if those two clients represented a broader need. Nobody had tested whether the solution they were building would even address what those clients meant.
This is discovery debt. And it’s everywhere.
The compound interest problem
Technical debt gets talked about constantly. Everyone understands the concept: you take shortcuts in code, they accumulate, and eventually you either pay them back or the system collapses under its own weight.
Discovery debt works the same way, but it’s harder to see. Every feature you ship without validating the problem creates a liability. Every assumption you encode into your product without testing it adds weight. Every bet you make based on intuition rather than evidence compounds.
The difference is that technical debt shows up in your codebase. Discovery debt shows up in your retention curves, your support queues, your feature adoption metrics. It’s diffuse. It’s delayed. By the time you notice it, you’re three years into a product full of things nobody wanted.
“Technical debt slows down your engineering. Discovery debt means you’re efficiently building the wrong things.”
I’ve started asking teams a simple question: for each major feature you shipped last year, can you show me the research that informed it? The hit rate is troubling. Maybe one in five can produce anything beyond “we had a hypothesis” or “customers were asking for it.”
The backlog graveyard
Here’s where it gets uncomfortable. Go look at your backlog. Not the active stuff. The bottom. The tickets that have been sitting there for eighteen months, slowly sinking under the weight of newer priorities.
That’s your discovery debt made visible.
Every one of those items represents something someone thought was important enough to capture but nobody thought was important enough to validate. They accumulate. They create a false sense of completeness. They make your roadmap planning slower because you’re constantly scrolling past ghosts of ideas past.
But the real debt isn’t in the backlog. It’s in what you already shipped.
I worked with a team that had a feature set they called “the museum.” Twelve interconnected capabilities that had been built over two years, used by almost nobody, but so entangled with the core product that removing them would take longer than building them did. They were paying maintenance costs, cognitive load costs, onboarding complexity costs, all for features that existed because someone once had a hunch.
The debt wasn’t the backlog items they hadn’t built. It was the shipped features they couldn’t remove.
The quantification trap
There’s a growing movement to measure discovery debt the way we measure technical debt. Story points of unvalidated assumptions. Risk scores on shipped features. Confidence intervals on roadmap items.
I understand the impulse. If you can’t measure it, you can’t manage it. Finance wants numbers. Leadership wants dashboards. “We have discovery debt” doesn’t get budget approved. “We have 847 story points of unvalidated assumptions representing £2.3M in potential rework” might.
But I’m sceptical of this approach. Discovery debt isn’t really quantifiable in meaningful terms. How do you assign a number to “we don’t actually know if users need this”? How do you score the risk of a feature that might be fine or might be fundamentally misconceived?
“Not everything that counts can be counted. Discovery debt is one of those things.”
The attempt to quantify often becomes a distraction from the actual work. Teams spend weeks building frameworks for measuring debt instead of doing the discovery that would reduce it. The spreadsheet becomes the deliverable.
What I’ve seen work better is simpler. Flag the shipped features with lowest adoption. Identify the upcoming roadmap items with weakest evidence. Make a list. Start at the top. Do the research you should have done before building.
Less satisfying than a dashboard. More likely to help.
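For teams that want a starting point, the triage above can be sketched in a few lines of code. This is a minimal sketch, not a framework: the feature names, adoption figures, evidence scale, and scoring formula are all hypothetical, and the point is the ordering it produces, not the precise numbers.

```python
# Rank shipped features by a rough "discovery debt" signal:
# low adoption combined with weak evidence floats to the top.
# All names and numbers below are hypothetical placeholders.

# adoption: share of active users who touched the feature last quarter.
# evidence: 0 = none, 1 = anecdotal requests, 2 = validated research.
features = [
    {"name": "reporting module", "adoption": 0.01, "evidence": 0},
    {"name": "bulk export",      "adoption": 0.40, "evidence": 2},
    {"name": "custom themes",    "adoption": 0.05, "evidence": 1},
]

def debt_score(feature):
    # Highest when almost nobody uses it and nothing validated it.
    return (1 - feature["adoption"]) * (2 - feature["evidence"])

triage = sorted(features, key=debt_score, reverse=True)
for f in triage:
    print(f'{f["name"]}: {debt_score(f):.2f}')
```

Note the deliberate choice in multiplying the two gaps: a well-researched feature scores zero no matter how low its adoption, because the debt being paid down here is unvalidated assumptions, not unpopular features.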
The ownership vacuum
Here’s the question that’s causing the most friction: who fixes this?
Product managers created much of the debt by shipping without validating. But they were often responding to pressure from leadership who wanted velocity over rigour. And they were working within systems that rewarded output over outcomes.
UX researchers, where they exist, might seem like natural owners. But most discovery debt predates their involvement. And asking research to audit and remediate years of PM decisions creates organisational tension that helps nobody.
Engineering increasingly wants a seat at the discovery table. Fair enough. But they’re also the ones being asked to maintain the features that shouldn’t have been built, which creates a different kind of frustration.
I’ve watched this debate consume entire quarters. The energy spent arguing about ownership is energy not spent actually reducing debt.
My view, for what it’s worth, is that discovery debt is a team sport to fix even if it wasn’t a team sport to create. The PM who shipped the unvalidated feature two years ago might not even work there anymore. The researcher who could have caught it might not have been hired yet. Assigning blame is satisfying but useless.
What matters is whether you’re going to keep accumulating or start paying down. That’s a decision for the whole product team, owned by whoever has the authority to protect time for it.
The uncomfortable truth
Here’s what nobody wants to say. Most products have so much discovery debt that addressing it properly would mean admitting that large portions of what exists shouldn’t.
That’s organisationally terrifying. Careers were built on those features. Promotions were earned shipping them. OKRs were hit. The sunk cost fallacy has tenure.
So teams do a softer version. They commit to “doing more discovery going forward” while leaving the existing debt untouched. They validate new features while ignoring the validated-by-nobody features already in production.
This is better than nothing. But it’s not fixing the problem. It’s deciding to stop digging while standing in a hole.
I don’t have a clean answer here. The debt is real, the causes are systemic, and the remediation is painful. Some of it might never get paid down. The features will just sit there, slowly becoming more expensive to maintain, until someone finally kills them or the product gets sunset entirely.
Maybe that’s fine. Maybe some debt you just live with.
But I’d feel better about that conclusion if more teams were at least looking at the invoice. Most haven’t even opened the envelope.