The Outcomes Delusion: Why the Thing Everyone Agrees On Keeps Failing in Practice
Product leaders have been preaching outcomes over outputs for a decade now. The sermon is correct. The congregation just can't figure out how to actually live it.
I sat in a product review last month where a VP proudly declared that the team would no longer be measured on features shipped. “We’re an outcomes-focused organisation now,” she said. “We care about customer value, not velocity.” The room nodded. Everyone felt virtuous.
Six weeks later, that same VP sent a Slack message asking why the roadmap looked “light” for Q2. The team had been running customer interviews and analysing retention cohorts. They hadn’t shipped anything visible. The outcomes focus lasted exactly until someone needed something to show the board.
This is the practicality gap that nobody wants to talk about. The outcomes versus outputs debate was settled philosophically years ago. Of course you should measure customer and business value. Of course shipping features that don’t move metrics is pointless busywork. Of course the goal is retention, revenue, engagement, satisfaction, not story points completed.
And yet.
The Attribution Problem Nobody Solved
Here’s the thing about outcomes: they’re lagging indicators that often take months to materialise, and when they do, good luck figuring out which work actually caused them.
Your retention improved by 4% this quarter. Was it the onboarding redesign that shipped in January? The pricing change from November finally taking effect? The competitor who imploded and sent you refugees? The macroeconomic shift that made customers less likely to churn anywhere? Some combination of all four that you’ll never untangle?
I’ve watched product teams spend entire offsite sessions trying to attribute outcome changes to specific initiatives. The honest answer is usually “we’re not sure, but we have theories.” That’s intellectually honest. It’s also completely useless when your finance partner needs to justify headcount.
Loom figured this out with what they call “Video First View” as their activation metric. A new customer isn’t considered fully activated until they create their first video and it gets at least one view within a week of signing up. That’s a good leading indicator. It’s measurable, attributable, and tied to a behaviour they can actually influence.
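A metric definition like that is concrete enough to sketch in code. Everything below is illustrative, not Loom's actual schema: the field names, the `is_activated` helper, and the hard-coded one-week window are all assumptions made for the example.

```python
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical window: both the video creation and its first view
# must happen within a week of signup for the user to count as activated.
ACTIVATION_WINDOW = timedelta(days=7)

def is_activated(signup_at: datetime,
                 first_video_at: Optional[datetime],
                 first_view_at: Optional[datetime]) -> bool:
    """A user is activated if they created a video AND it received
    at least one view, both within a week of signing up."""
    if first_video_at is None or first_view_at is None:
        return False
    window_end = signup_at + ACTIVATION_WINDOW
    return first_video_at <= window_end and first_view_at <= window_end

signup = datetime(2024, 1, 1)
print(is_activated(signup, datetime(2024, 1, 2), datetime(2024, 1, 5)))  # True
print(is_activated(signup, datetime(2024, 1, 2), None))                  # False
```

The point of a definition this crisp is that it leaves no room for argument in the quarterly review: a user either cleared the bar inside the window or they didn't.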
But most teams don’t have metrics that clean. Most teams are working with outcomes that involve dozens of variables, long feedback loops, and stakeholders who need answers faster than the data can provide them.
The Morale Thing Nobody Mentions
There’s another dimension to this that gets conveniently ignored in the outcomes discourse: people need to feel like they’re making progress.
I talked to an engineering lead at a fintech company that went fully outcomes-focused eighteen months ago. Teams stopped tracking velocity. They stopped celebrating releases. Everything was oriented around moving customer metrics.
“By month four, people were miserable. We’d work for weeks on something, ship it, and then... nothing. The metric wouldn’t move, or it would move and we couldn’t tell if it was us. There was no sense of accomplishment. No rhythm. Just waiting.”
They eventually reintroduced some output tracking. Not as the primary measure of success, but as what he called “progress heartbeats.” Ways for the team to feel forward motion while waiting for the outcomes to reveal themselves.
This is the part that gets left out of the thought leadership. Humans are not infinitely patient creatures who can delay gratification for quarterly business reviews. They need smaller wins. They need to see the thing they built go live. They need someone to say “good work” more often than once per OKR cycle.
The Stakeholder Translation Problem
Actually, that’s not quite right. The morale issue is real, but there’s something deeper happening here.
Most product teams don’t operate in isolation. They exist within organisations that have sales teams making promises, executives making forecasts, and board members asking questions. Those stakeholders often don’t care about your carefully constructed outcome metrics. They want to know what’s shipping and when.
A PM at a B2B SaaS company told me about her quarterly business review:
“I walked in with this beautiful presentation about how our NPS had improved and our support ticket volume had dropped. The CEO’s first question was ‘what features shipped this quarter that I can tell customers about?’ He didn’t want outcomes. He wanted a list.”
You can argue that the CEO was wrong. You can say he should care more about customer satisfaction than feature announcements. But arguing with reality doesn’t change it. The PM still had to produce a feature list. The outcomes story was a nice addition, not a replacement.
This is the translation problem. Outcomes are how product teams should think about their work. Outputs are often how the rest of the organisation needs to hear about it. Pretending that gap doesn’t exist, or that you can simply educate stakeholders into caring about retention curves, ignores how most companies actually function.
The Uncomfortable Middle Ground
So where does this leave us?
I think the honest answer is that most teams need both, and the ratio depends on context in ways that resist prescription.
Early-stage products need more output focus because you’re still learning what outcomes even matter. You’re shipping to discover, not shipping to move a known metric. Mature products can lean more heavily into outcomes because the feedback loops are established and the attribution is cleaner. Teams with supportive leadership can be more outcomes-focused than teams reporting to executives who want feature counts.
The real skill, and I’m still working on this myself, is knowing when to shift between the two framings. When to tell the outcome story and when to tell the output story. When to push back on “what shipped this quarter” and when to just answer the question.
Product Focus suggests using what they call leading indicators: metrics like the proportion of customers taking the standard proposition rather than a customised solution, or the number of customers actively engaged in testing and pilots. These sit somewhere between pure outcomes and pure outputs. They’re things teams can influence directly, they move faster than lagging business metrics, and they correlate (hopefully) with the outcomes you actually care about.
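Tracking an indicator like the first one is deliberately simple, which is part of its appeal. Here's a minimal sketch; the data shape and the `solution` field are hypothetical, not anything Product Focus prescribes.

```python
# Illustrative leading indicator: share of customers on the standard
# proposition vs. a customised solution. The dict shape is an assumption.
def standard_proposition_rate(customers: list[dict]) -> float:
    """customers: dicts with a 'solution' key, 'standard' or 'custom'."""
    if not customers:
        return 0.0
    standard = sum(1 for c in customers if c["solution"] == "standard")
    return standard / len(customers)

cohort = [
    {"id": 1, "solution": "standard"},
    {"id": 2, "solution": "custom"},
    {"id": 3, "solution": "standard"},
    {"id": 4, "solution": "standard"},
]
print(standard_proposition_rate(cohort))  # 0.75
```

A number like this updates every time a deal closes, so the team gets feedback in days rather than waiting a quarter for retention data to settle.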
That’s probably the right direction. Find the intermediate measures that bridge the gap. But I’ve yet to see a company that’s fully cracked this. Everyone’s still fumbling toward something that works for their specific situation.
The Part I Haven’t Resolved
Here’s what I keep coming back to. The outcomes movement was a necessary correction to years of feature factory thinking. Teams shipping roadmap items that nobody used, celebrating velocity while customers churned, conflating busyness with value.
But the correction overcorrected. It created a new orthodoxy where admitting you track outputs feels like confessing to a product management sin. Where the “right” answer in any discussion is always outcomes, even when outcomes aren’t practically measurable or attributable in your context.
The best PMs I know hold both ideas simultaneously. They genuinely care about customer and business value. They also track what shipped this sprint and feel good when the release goes out. They think in outcomes and communicate in outputs, or vice versa, depending on the audience.
That’s not intellectual inconsistency. That’s just the job.