The Metric Facade Problem: Why Your Dashboard Is Lying to You
When everyone's watching the same numbers, nobody's watching what actually matters. Most product teams have built elaborate stages for performances nobody asked for.
I was sitting in yet another quarterly review last week when someone pulled up the engagement dashboard. You know the one: daily active users trending up and to the right, retention cohorts looking healthy, feature adoption climbing. The VP nodded. The room relaxed. Nobody mentioned that our support tickets had doubled or that three enterprise customers were quietly evaluating competitors.
This happens everywhere. We’ve gotten really good at building dashboards that tell us we’re doing fine.
The thing is, metrics used to be diagnostic tools. Now they’re more like those fake security cameras in parking garages: visible, official-looking, and completely useless for their stated purpose. They exist to make people feel better about not knowing what’s actually happening.
I’ve watched this pattern play out at four different companies now. It always starts the same way. Leadership asks for better visibility. Product scrambles to instrument everything. Engineering builds the pipeline. Someone designs a beautiful dashboard in Looker or Amplitude or whatever. Then everyone promptly starts optimising for the wrong things because the dashboard only shows what’s easy to measure, not what’s important.
Here’s what actually happens: your “North Star” metric becomes the thing you game instead of the thing that guides you. Teams learn which levers to pull to make the line go up. They stop asking whether the line should be going up. They definitely stop asking what’s happening in the spaces between the metrics.
Take engagement. Please. Everyone tracks it. Nobody agrees on what it means. Is it sessions per user? Time in app? Features touched? Actions completed? The answer is always “yes, all of those” because choosing would require admitting what you’re actually trying to learn.
I talked to a PM last month who told me their activation metric was “completing onboarding.” When I asked what onboarding completion predicted, she paused. “Honestly? Just that someone finished the tutorial.” Turns out their best customers skipped onboarding entirely and went straight to the core workflow. But the metric was up 15% quarter over quarter, so leadership was happy.
This isn’t stupidity. It’s incentive design meeting human nature. When you’re judged on metric movement, you move metrics. When the CEO asks about the dashboard in the all-hands, you make sure the dashboard looks good for the all-hands. When OKR reviews focus on quantitative targets, you pick targets you can hit instead of outcomes you need.
The companies that figured this out, the ones doing actual discovery instead of metric theatre, treat dashboards differently. They use them like triage nurses use vital signs: quick checks that tell you where to look deeper, not definitive diagnoses. Spotify’s squad health checks weren’t about tracking numbers. They were conversation starters. Amazon’s working-backwards docs force you to articulate the customer problem before anyone looks at a forecast.
Actually, that’s not quite right. Amazon still has plenty of metric theatre. Everyone does. The difference is knowing which metrics are for show and which ones matter.
Most product teams have this backwards. They treat their public metrics (the ones in board decks and all-hands) as truth, and their messy qualitative signals (support tickets, sales call notes, user interviews) as nice-to-have context. It should be the opposite. The qual data tells you what’s happening. The quant data tells you how often.
But qualitative work doesn’t scale the same way dashboards do. You can’t set and forget a customer interview the way you can set and forget a Mixpanel board. There’s no automated pipeline for reading Slack screenshots of user confusion. Nobody gets promoted for manually triangulating signals from five different sources.
So we build dashboards instead. We instrument everything. We set up alerts and anomaly detection and weekly metric reviews. We create elaborate rituals around numbers that everyone knows are lagging indicators of things we should have noticed weeks ago.
The really insidious part? These dashboards create their own reality. Once you’re tracking something, teams orient around it. The metric becomes the goal. The dashboard becomes the product. You end up with organisations optimising for dashboard green-ness instead of customer value.
I’m not saying don’t use metrics. Use them. Just stop pretending they’re telling you the whole story. Stop treating them like objective truth instead of subjective choices about what to count. Stop rewarding people for hitting numbers without asking what had to break to make those numbers move.
Your engagement is up because you added a daily notification that 60% of users immediately dismiss. Your activation is up because you shortened onboarding to three steps, and now nobody understands the product. Your retention is up because you’re measuring 28-day retention instead of 90-day, and you stopped counting churned users in the denominator.
These aren’t hypotheticals. I’ve seen every single one in the last six months.
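
That last retention trick is pure arithmetic, and it’s worth seeing in miniature. Here’s a toy Python sketch, with invented numbers, of how the same signup cohort produces three different “retention” figures depending on the window you pick and who’s allowed into the denominator:

```python
# Toy sketch: every number here is invented for illustration.
# Days each user in one signup cohort stayed active before going quiet.
days_active = [35, 5, 120, 50, 3, 15]

def retention(cohort, window_days, min_tenure_days=0):
    """Fraction of the cohort still active after `window_days`.

    `min_tenure_days` models the denominator trick: anyone who churned
    before that threshold gets excluded as "never really activated,"
    which can only push the ratio up.
    """
    denominator = [d for d in cohort if d >= min_tenure_days]
    retained = [d for d in denominator if d >= window_days]
    return len(retained) / len(denominator) if denominator else 0.0

print(f"Honest 90-day retention:    {retention(days_active, 90):.0%}")      # 17%
print(f"28-day window instead:      {retention(days_active, 28):.0%}")      # 50%
print(f"28-day, early churn culled: {retention(days_active, 28, 7):.0%}")   # 75%
```

Same users, same behaviour; the headline number goes from 17% to 75% without a single product change. Neither definitional choice is announced in the board deck.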
The fix isn’t better metrics. It’s being honest about what metrics can and cannot tell you. It’s spending as much time in support tickets and sales calls as you do in dashboards. It’s asking “what would have to be true for this metric to be misleading?” before you celebrate hitting your target.
It’s accepting that the most important things happening in your product probably aren’t showing up in your weekly metrics review. They’re happening in the gaps. In the workarounds your power users built. In the features they stopped using. In the complaints they stopped making because they stopped expecting you to fix them.
Your dashboard isn’t lying to you on purpose. It’s just showing you what you told it to show. The question is whether you’re still watching what matters, or just watching the screen.

