Why Your Product Metrics Might Be Lying to You — and How to Fix Them
- SunAi Murugan
- Oct 22
- 12 min read
You're staring at your dashboard. The numbers look great. Downloads are up 300%. Engagement is climbing. Revenue is growing. Your team is celebrating.
But three months later, your product is bleeding users and nobody knows why.
Sound familiar? You're not alone. Product metrics can be incredibly misleading when you don't understand what they're really telling you—or more importantly, what they're hiding.
Let's dive into the five critical categories of product metrics, the dangerous lies each one can tell, and how to uncover the truth hiding beneath the numbers.
The Hidden Danger in Your Metrics
Before we begin, here's the uncomfortable truth: every metric can lie to you if you're measuring the wrong thing or looking at it in isolation.
The companies that succeed—YouTube, Facebook, Twitter—don't just track metrics. They understand the complete story their metrics tell, including the plot holes. They know that growth without retention is vanity, engagement without happiness is smoke and mirrors, and revenue without understanding its source is a house of cards.
Let's examine each metric category, see how it can deceive you, and learn how to catch these lies before they sink your product.
1. Growth and Activation: The Vanity Trap
The Metrics
Total new users per month/week
New users by source (SEO, app stores, ads, referrals)
Activated users (those who complete a meaningful first action)
The Lie: "We're Growing!"
The Deceptive Scenario:
You're running FitTrack, a fitness app. Your March dashboard shows:
50,000 downloads (up from 10,000 last month!)
35,000 from App Store search
10,000 from Instagram ads
5,000 from referrals
Your CEO is thrilled. "We're scaling! Let's double the ad budget!"
The Hidden Truth
But wait. Let's dig deeper:
Of those 50,000 downloads, only 20,000 created accounts
Of those 20,000 accounts, only 8,000 logged their first workout
Your real activation rate is 16% (8,000 ÷ 50,000)
What happened to the other 42,000 people? They downloaded your app and immediately abandoned it. You're paying to acquire users who will never use your product.
Even worse: When you analyze by source:
App Store organic: ~16% activation rate (5,500 of 35,000)
Instagram ads: 5% activation rate (500 of 10,000)
Referrals: 40% activation rate (2,000 of 5,000)
The Fix
Stop celebrating downloads. Celebrate activation.
The Instagram ads bringing you 10,000 downloads with 5% activation are actually delivering just 500 real users at a much higher effective cost. Meanwhile, referrals have 40% activation—these are quality users who were recommended by friends and already understand the value.
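To make this concrete, here's a minimal sketch of the per-channel math using the illustrative FitTrack numbers above. The download and activation counts come from the scenario; the spend figures are hypothetical, added only to show effective cost per activated user:

```python
# Activation rate and effective cost per acquisition channel.
# Counts mirror the FitTrack scenario; spend figures are hypothetical.

downloads = {"app_store_organic": 35_000, "instagram_ads": 10_000, "referrals": 5_000}
activated = {"app_store_organic": 5_500,  "instagram_ads": 500,    "referrals": 2_000}
spend     = {"app_store_organic": 0,      "instagram_ads": 30_000, "referrals": 5_000}

for channel in downloads:
    rate = activated[channel] / downloads[channel]
    cost = spend[channel] / activated[channel]  # $ per *activated* user, not per download
    print(f"{channel:18} activation {rate:6.1%}   cost per activated user ${cost:,.2f}")
```

The point: a channel's cost per activated user, not its cost per download, is the number that should drive budget.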
What to do:
Redefine success: Track "activated users" as your north star, not downloads
Segment by source: Calculate activation rate per channel
Cut losers fast: That Instagram ad campaign? Kill it or completely redesign it
Double down on winners: Invest in referral programs and organic discovery
Fix the funnel: Why do 60% of people who download never create an account? Test your onboarding ruthlessly
Real example: Dropbox famously shifted from paid ads (which had terrible activation) to their referral program, which gave both referrer and referee extra storage. Their activation rates soared because referred users already understood the product's value.
2. Retention: The Leaky Bucket Illusion
The Metrics
Retained users (returning week/month over week/month)
Resurrected users (inactive users who return)
Retention rate percentage
The Lie: "Our Users Love Us!"
The Deceptive Scenario:
Your FitTrack dashboard in April shows:
January: 20,000 activated users
February: 12,000 returned (60% retention)
March: You resurrected 3,200 of the 8,000 who left (40% resurrection rate)
Current active users: 15,200 (February's 12,000 plus the 3,200 you won back—or so the dashboard implies)
Your team celebrates: "We brought back 40% of churned users! Our retention strategy works!"
The Hidden Truth
Let's look at the complete picture:
Month 1 (January):
20,000 activated users
Month 2 (February):
12,000 retained (60%)
8,000 lost (40% churn)
Month 3 (March):
7,200 retained from February (60% of 12,000)
3,200 resurrected from January
4,800 lost from February (40% of 12,000)
Total: 10,400 active users from your original cohort
You've lost 9,600 users (48%) in just two months. You're celebrating resurrection rates while your bucket has massive holes.
Even worse: Those resurrected users? Track their behavior separately. Often they churn again at even higher rates because they already decided once that your product wasn't valuable.
The Fix
Stop patching holes with resurrection. Fix the leak.
Resurrection campaigns feel productive—you're "winning back" users. But you're spending resources (emails, notifications, maybe even discounts) to bring back people who already told you they don't find value in your product.
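In practice, "cohort analysis" just means pivoting activity on each user's first active month. Here's a minimal sketch, assuming an activity log with one row per user per active month (column names and the tiny dataset are illustrative):

```python
# Build a cohort retention table from a (user_id, month_active) activity log.
import pandas as pd

events = pd.DataFrame({
    "user_id":      [1, 1, 1, 2, 2, 3, 4, 4, 5],
    "month_active": ["2025-01", "2025-02", "2025-03", "2025-01", "2025-02",
                     "2025-01", "2025-02", "2025-03", "2025-02"],
})

# A user's cohort is the first month they were active.
events["cohort"] = events.groupby("user_id")["month_active"].transform("min")

# Distinct active users per (cohort, month), divided by each cohort's size.
active      = (events.groupby(["cohort", "month_active"])["user_id"]
                     .nunique().unstack(fill_value=0))
cohort_size = events.groupby("cohort")["user_id"].nunique()
print(active.div(cohort_size, axis=0).round(2))
```

Each row is one cohort; reading left to right shows the real decay curve that a blended monthly retention number hides.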
What to do:
Cohort analysis is everything: Track each month's users separately. Don't let new user growth mask retention problems
Calculate true retention: Follow a single cohort (January's 20,000 users) month by month. If only 8,000 are still active in June, your 6-month retention is 40%—not the 60% your monthly numbers suggested
Find the drop-off point: When do users churn? After Day 3? After Week 2? That's where your product fails to deliver value
Interview churned users: Actually talk to people who left. "Why did you stop using FitTrack?" Their answers are gold
Measure resurrection separately: Track resurrected users as their own cohort. If they churn again at 70%, stop wasting resources on resurrection and fix retention instead
Real example: Slack obsessively tracks their "2,000 message milestone"—teams that send 2,000 messages have 93% retention. They don't focus on resurrecting teams who never hit this threshold; they focus on getting active teams to 2,000 messages faster.
3. Engagement: The Activity Trap
The Metrics
Average actions per user (posts, likes, views)
Session length/frequency
Feature usage rates
The Lie: "Users Are Highly Engaged!"
The Deceptive Scenario:
You're running SnapMoment, a photo-sharing app. Your April metrics are phenomenal:
Average user posts 15 photos per week (up from 4!)
Average session time: 45 minutes (up from 15!)
Average user gives 100 likes per week (up from 25!)
You ship a report to executives titled "Engagement Explodes 300%!"
The Hidden Truth
Two months later, you've lost 30% of your users. What happened?
You dig into the data and discover:
80% of posts come from just 5% of users (power users posting obsessively)
The average 45-minute session? It's bimodal: 70% of users spend 5 minutes, while 10% spend 4+ hours
Those 100 likes per week? Bots and spam accounts you haven't detected yet
90% of your "normal" users actually reduced their activity
Your engagement metrics spiked because of power users and fake accounts, while your core user base was quietly leaving.
The Fix
Stop looking at averages. Look at distributions.
Averages lie when distributions are uneven. If 9 people earn $30,000/year and 1 person earns $1,000,000/year, the "average" income is $127,000—which represents exactly nobody.
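The same effect shows up with session lengths. A quick sketch, with numbers made up to mirror the bimodal pattern above:

```python
# Why the average lies: a bimodal session-length distribution (minutes).
import statistics

sessions = [5] * 70 + [15] * 20 + [240] * 10   # 70% casual, 20% core, 10% power users

print("mean:  ", statistics.mean(sessions))    # ~30 min — describes almost nobody
print("median:", statistics.median(sessions))  # 5 min — what a typical user actually does
deciles = statistics.quantiles(sessions, n=10)
print("p10:", deciles[0], " p90:", deciles[-1])  # the spread tells the real story
```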
What to do:
Segment your users: Separate power users (top 10%), core users (middle 60%), and casual users (bottom 30%)
Track engagement by segment: Is engagement growing for core users or just power users?
Look for distribution changes: Are more users becoming inactive? Is the middle falling out?
Define healthy engagement: YouTube counts a "view" as 30+ seconds because shorter views don't indicate real interest. What's your threshold for meaningful engagement?
Watch for fake activity: Sudden engagement spikes often indicate bots, spam, or gaming of your system
Balance depth and breadth: Having 1,000 users spend 2 hours each is very different from having 10 users spend 200 hours each
Real example: Twitter discovered that showing "while you were away" summaries increased session frequency but decreased session depth—users checked in more often but spent less total time. They had to decide which mattered more for their business model (ads favor total time).
Facebook found that "engagement" from angry reactions led to toxic environments and churn, despite high activity. They adjusted their algorithm to prioritize "meaningful" engagement over total engagement.
4. User Happiness: The Satisfaction Paradox
The Metrics
Net Promoter Score (NPS): Likelihood to recommend on a 0-10 scale
App store ratings
Customer service complaint volume
The Lie: "Our Customers Are Satisfied!"
The Deceptive Scenario:
You're running TuneStream, a music service. Your June happiness metrics look solid:
NPS: +25 (positive is good, right?)
App Store rating: 4.2 stars
Customer complaints: 200 per month (a tiny fraction of your user base)
Leadership sees this and decides not to invest in customer experience improvements.
The Hidden Truth
Six months later, a competitor launches. Within 60 days, you lose 40% of your subscribers. How did this happen if customers were "satisfied"?
Let's decode what your metrics were actually saying:
NPS Score of +25:
This means promoters outnumber detractors by 25 percentage points
But "passives" (scores of 7-8) don't count in the formula
Your real breakdown: 40% promoters, 45% passives, 15% detractors (see the sketch below)
45% of your users are neutral—they'll leave the moment something better appears
4.2 Star Rating:
Sounds decent, but in app stores, anything below 4.5 is considered problematic
Users typically only rate after extreme experiences (very good or very bad)
Your 4.2 means you have a lot of angry users who took time to rate you poorly
The vast majority never rate at all—they just quietly churn
200 Complaints:
Only 2% of unhappy users actually complain—the rest just leave
200 complaints likely represents 10,000+ users with problems
What are the complaints about? "App crashes when downloading songs"
You've known about this bug for 4 weeks and haven't prioritized it
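If you want to sanity-check the NPS decode above, the arithmetic is a few lines. A minimal sketch, with a score distribution that mirrors the TuneStream breakdown:

```python
# NPS = % promoters (scores 9-10) minus % detractors (0-6); passives (7-8) vanish.
def nps(scores):
    promoters  = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return 100 * (promoters - detractors) / len(scores)

# 40 promoters, 45 passives, 15 detractors out of 100 respondents -> +25,
# even though nearly half the respondents are lukewarm.
scores = [10] * 40 + [7] * 45 + [4] * 15
print(nps(scores))  # 25.0
```

Notice how 45 lukewarm respondents leave no trace in the final score.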
The Fix
Satisfied customers leave too. You need loyal, passionate customers.
The difference between "satisfied" and "loyal" is the difference between surviving and thriving. Satisfied customers are one good competitor away from churning.
What to do:
Treat NPS scores properly:
Anything below +50 is mediocre (yours is +25)
Focus on converting passives (7-8 scores) to promoters (9-10)
Actually call your detractors and ask what went wrong
Context matters for ratings:
A 4.2 rating for a cable company might be industry-leading
A 4.2 rating for a social app is a death sentence
Compare against competitors, not in a vacuum
Every complaint is the tip of an iceberg:
Multiply complaint volume by 50x to estimate true problem scale
Track complaint categories: Are 70% about the same issue?
Measure time-to-resolution: Are complaints sitting for weeks?
Track happiness over time:
Is your NPS declining month-over-month? You're in trouble
Did a recent update cause a rating drop? Roll it back immediately
Measure differently:
Ask: "How disappointed would you be if this product disappeared tomorrow?" (Very/Somewhat/Not)
Track: Are users following you on social media, joining your community, creating content?
These indicate passion, not just satisfaction
Real example: Apple's iPhone has always commanded premium prices not because customers are "satisfied" but because they're loyal advocates who actively promote the product. Their NPS consistently exceeds +70.
Cable companies often have NPS scores below zero—more detractors than promoters—yet customers stay because there's no alternative. They're satisfied with the necessity, not the service. The moment Google Fiber comes to town? Mass exodus.
5. Revenue: The Profitability Mirage
The Metrics
Lifetime Value (LTV): Revenue per customer over their lifetime
Customer Acquisition Cost (CAC): Cost to acquire one customer
Monthly Recurring Revenue (MRR): Total subscription revenue per month
Annual Recurring Revenue (ARR): MRR × 12
The Lie: "We're Making Money!"
The Deceptive Scenario:
You're running CloudVault, a cloud storage service. Your Q2 revenue report is spectacular:
MRR: $150,000 (up 67%!)
ARR: $1,800,000
10,000 total subscribers
LTV: $360 (customers stay 24 months at $15/month average)
CAC: $20
LTV:CAC ratio: 18:1 (investors love this!)
Your board approves a massive increase in marketing spend to scale growth.
The Hidden Truth
Nine months later, you're burning through cash and the board is demanding answers. What went wrong?
Let's examine what your metrics were hiding:
The LTV Calculation Was Wrong:
You calculated LTV based on historical data: customers who signed up 24+ months ago
Those early customers were enthusiasts who found you organically
Your recent customers (from paid acquisition) are churning after 8 months
Real LTV for new customers: $120 (8 months × $15)
The CAC Calculation Was Incomplete:
Your $20 CAC only counted ad spend
It didn't include: sales team salaries, free trial credits, onboarding support, referral bonuses
Real CAC: $85
Your Actual Ratio:
Real LTV:CAC = $120:$85 = 1.4:1
You're losing money on every new customer
As you scaled marketing, you scaled your losses
The MRR Growth Was Masking Churn:
New subscribers: +3,000/month
Churned subscribers: -2,000/month
Net growth: +1,000/month
You celebrated the growth while ignoring that 67% of new subscribers only replaced churned ones
Your churn rate is 20% per month—most customers leave after 5 months
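Here's the corrected arithmetic as a minimal sketch. The ARPU, lifetime, and $85 total CAC come from the scenario above; the CAC line-item split and the gross-margin figure are assumptions added for illustration:

```python
# Honest unit economics for the CloudVault scenario.
arpu        = 15.0   # $/month per subscriber (from the scenario)
lifetime_mo = 8      # recent paid cohorts churn after ~8 months
ltv         = arpu * lifetime_mo                 # $120, not the historical $360

# Fully loaded CAC: the $20 ad figure plus everything it ignored.
# The split below is hypothetical; only the $85 total is from the scenario.
cac = 20 + 40 + 15 + 10  # ads + sales salaries + trial credits + onboarding

gross_margin   = 0.80                            # assumed margin after serving costs
payback_months = cac / (arpu * gross_margin)

print(f"LTV:CAC = {ltv / cac:.1f}:1")            # 1.4:1 — you lose money at scale
print(f"payback = {payback_months:.1f} months")  # vs. an 8-month customer lifetime
```

With an assumed 80% margin, it takes roughly seven months just to recover acquisition costs—against an eight-month lifetime, you barely break even before the customer leaves.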
The Fix
Stop celebrating revenue. Celebrate profitable, sustainable revenue.
Growth that requires constant spending to maintain isn't growth—it's a treadmill. The moment you stop running (spending on acquisition), everything collapses.
What to do:
Calculate LTV honestly:
Use recent cohorts (last 6 months), not all-time data
Segment by acquisition channel—organic vs paid LTV can be radically different
Account for churn rates realistically
Include expansion revenue (upgrades) and contraction (downgrades)
Calculate CAC completely:
Include all costs: ads, salaries, tools, free trials, incentives
Calculate by channel separately
Include failed acquisition attempts (you paid for clicks that didn't convert)
Know your target ratio:
3:1 is healthy for sustainable growth
Below 3:1 means you're growing unprofitably
Above 10:1 means you should be spending MORE on acquisition
Track MRR cohorts:
Don't just track total MRR—track MRR by cohort
January's cohort: Started at $50K MRR, now at $30K (40% churn)
This shows the health of your revenue, not just the size
Monitor churn separately:
Calculate monthly churn rate (churned subscribers ÷ starting subscribers)
Industry benchmark for SaaS is 5-7% annual churn (0.42-0.58% monthly)
Anything above 5% monthly is a crisis
Understand unit economics:
Gross margin per customer (revenue minus direct costs)
Payback period (how long to recover CAC from revenue)
If your payback period is 24 months but customers only stay 8 months, you never break even
Real example: MoviePass famously offered unlimited movies for $10/month—their revenue looked great with hundreds of thousands of subscribers. But each customer cost them $20-30/month in theater fees. Their LTV was negative. The business model was fundamentally broken despite growing revenue.
Conversely, Amazon Prime loses money on the subscription itself but increases LTV through additional purchases. They understood the complete unit economics.
The Biggest Lie: Looking at Metrics in Isolation

Here's the most dangerous deception: Every metric lies when viewed alone.
Let's return to FitTrack one final time and see how looking at metrics in isolation creates a false narrative:
Q1 in Isolation:
Growth: 50,000 downloads (seems great!)
Retention: 60% (above 50% benchmark!)
Engagement: 3 workouts/week (users are active!)
Happiness: NPS of +25 (positive score!)
Revenue: MRR of $300K, LTV:CAC of 22:1 (profitable!)
Board meeting conclusion: "We're crushing it! Let's scale!"
Q1 Complete Picture:
Growth: 50K downloads but only 16% activation—84% immediately abandon
Retention: 60% monthly retention compounds to roughly 13% by month four and under 1% by month twelve—you lose virtually everyone within a year
Engagement: 3 workouts/week average, but 60% of users log zero workouts (power users skew the average)
Happiness: NPS of +25 with 45% passives—nearly half your users don't care enough to recommend you
Revenue: LTV:CAC of 22:1 based on old data—new cohorts show 3:1 (barely profitable)
Board meeting conclusion: "We have fundamental product-market fit problems. Scaling now would accelerate our failure."
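If that retention line looks too brutal to be true, the compounding is easy to check:

```python
# Monthly retention compounds multiplicatively.
monthly_retention = 0.60
for month in (1, 4, 12):
    print(f"month {month:2}: {monthly_retention ** month:6.1%} of the cohort remains")
```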
How to See the Complete Truth
1. Create a Metrics Framework
Track all five categories together:
Growth AND activation (not just downloads)
Retention AND resurrection (with separate cohort tracking)
Engagement distribution (not just averages)
Happiness trends (not just snapshots)
Revenue economics (complete unit economics)
2. Segment Everything
Never look at aggregate data without segmentation (a minimal sketch follows this list):
By user type (new vs returning, power vs casual)
By acquisition channel (organic vs paid, by specific source)
By geography (US users may behave differently than international)
By device (iOS vs Android, mobile vs desktop)
By cohort (January users vs February users)
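As promised, here's the same metric aggregated and then segmented, as a minimal sketch (all column names and values are illustrative):

```python
# The blended number vs. the segmented story.
import pandas as pd

users = pd.DataFrame({
    "channel":  ["organic", "paid", "organic", "paid", "referral", "paid"],
    "device":   ["ios", "android", "android", "ios", "ios", "android"],
    "retained": [1, 0, 1, 0, 1, 1],
})

print("blended retention:", users["retained"].mean())           # one flattering number
print(users.groupby(["channel", "device"])["retained"].mean())  # where it's earned and lost
```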
3. Compare to Benchmarks
Your metrics mean nothing without context:
Compare to competitors (where possible)
Compare to industry standards
Compare to your own historical performance
Compare across different segments
4. Ask "So What?"
For every metric, ask:
So what does this actually mean for user behavior?
So what action should we take based on this?
So what happens if this trend continues?
If you can't answer these questions, you're tracking vanity metrics.
5. Follow the Money and the Users
Two questions cut through every lie:
Are we retaining users who genuinely love our product?
Are we making more money from customers than it costs to acquire and serve them?
If the answer to both is "yes," you're probably fine. If either is "no," your other metrics don't matter.
Real-World Examples of Metrics Lying
Zynga (FarmVille):
The lie: Massive daily active users and engagement (millions playing daily)
The truth: Engagement was driven by spammy notifications and psychological tricks, not genuine enjoyment
The result: When Facebook changed notification rules, engagement collapsed—users didn't actually like the product
Snapchat Spectacles:
The lie: 220,000+ units sold in the first year—a breakout hardware hit
The truth: Most purchases were driven by hype and novelty, not genuine demand. Actual daily usage fell below 10%
The result: $40 million write-down on unsold inventory, product discontinued
Blue Apron:
The lie: Rapid subscriber growth, high engagement (people cooking meals)
The truth: CAC was $94, LTV was only $320—barely profitable. Churn was 70% within 6 months
The result: Stock price collapsed 90% as the unit economics became clear
WeWork:
The lie: Billions in revenue, rapid expansion, high occupancy rates
The truth: "Community adjusted EBITDA" excluded their actual costs. Each customer cost more to acquire and serve than they generated in revenue
The result: Failed IPO, near bankruptcy, $47B valuation to $9B
How to Fix Your Metrics (Action Plan)
Week 1: Audit Your Current Metrics
List every metric you currently track
For each metric, write down: "What decision would change if this metric changed?"
If you can't answer, stop tracking it
Week 2: Build Your Framework
Set up tracking for all five categories (growth, retention, engagement, happiness, revenue)
Create dashboards that show these together, not in isolation
Add segmentation to every metric
Week 3: Calculate True Unit Economics
Real LTV (recent cohorts only)
Complete CAC (all costs)
LTV:CAC ratio
Payback period
Monthly churn rate
Week 4: Start Cohort Analysis
Track January 2025 users monthly: how many are still active?
Track retention by acquisition channel
Track engagement distribution, not averages
Week 5: Interview Users
Talk to 10 power users: Why do they love your product?
Talk to 10 churned users: Why did they leave?
Talk to 10 passive users: What would make them passionate advocates?
Week 6: Make Decisions
Which acquisition channels should you kill?
Which features drive retention vs engagement?
Which user segment should you focus on?
What's the one thing holding back growth?
The Bottom Line
Your product metrics are lying to you. Not intentionally—metrics don't have agency. But they're lying through:
Omission: Showing growth while hiding churn
Aggregation: Averaging away the truth about user distributions
Confusion: Mixing vanity metrics with actionable metrics
Isolation: Telling you about revenue without context of cost
Staleness: Showing you historical patterns that no longer apply
The solution isn't to stop tracking metrics—it's to track them properly:
Completely (all five categories)
Honestly (with true costs and realistic projections)
Granularly (with segmentation and cohorts)
Contextually (with benchmarks and distributions)
Actionably (with clear decisions tied to each metric)
Your metrics should tell you the truth, even when it's uncomfortable. Especially when it's uncomfortable. Because the longer you believe the lies, the harder the truth becomes to face.
Start asking harder questions of your data. Your product—and your company—will thank you.