TL;DR
- NPS tracks loyalty, CSAT measures satisfaction, CES gauges effort — each serves a different purpose in your CX program.
- Choose based on what you're trying to measure:
  - NPS – long-term relationship health and referral likelihood
  - CSAT – immediate satisfaction at specific touchpoints
  - CES – friction and ease of interaction
- The best CX programs use all three at different journey stages — NPS quarterly, CSAT post-interaction, CES post-task completion.
- Response rates vary by channel: SMS (35-55%), embedded email (25-35%), in-app (20-40%).
You're measuring customer experience, but which metric actually tells you what you need to know?
NPS tracks loyalty. CSAT captures satisfaction. CES gauges effort. They sound similar — they all involve surveys, scores, and customer feedback. But they measure fundamentally different things, and choosing the wrong one can send your CX program in the wrong direction.
Here's the thing: most companies pick a metric because it's popular or because a competitor uses it. Then they spend months collecting data that doesn't answer the questions they actually need answered. NPS tells you who might recommend you, but it won't tell you why support calls take 20 minutes. CSAT tells you customers are satisfied, but it won't predict who's about to churn. CES identifies friction, but it won't reveal whether customers actually like your product.
This guide breaks down all three metrics, shows you exactly when to use each, and helps you build a multi-metric CX program that actually predicts churn and drives retention.
NPS vs CSAT vs CES: What Each Metric Measures
Before we compare them, let's establish what each metric actually measures. These aren't interchangeable. They're designed for different purposes.
1. NPS — Loyalty & Likelihood to Recommend
Net Promoter Score asks one question: "How likely are you to recommend us to a friend or colleague?" Customers respond on a 0–10 scale. That's it.
What it measures: Overall relationship quality, brand loyalty, willingness to advocate. NPS doesn't ask about a specific interaction. It asks about the entire experience of being your customer.
Score range: -100 to +100, calculated as % Promoters (9–10) minus % Detractors (0–6). Passives (7–8) don't factor into the formula.
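The formula is simple enough to sketch in a few lines of Python (a minimal illustration, not a production implementation):

```python
def nps(scores):
    """Net Promoter Score from a list of 0-10 ratings.

    Promoters rate 9-10, Detractors 0-6; Passives (7-8)
    count toward the total but not toward either percentage.
    """
    total = len(scores)
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / total)

# 4 Promoters, 2 Passives, 4 Detractors out of 10 responses
print(nps([10, 9, 9, 10, 8, 7, 6, 3, 0, 5]))  # 0
```

Note how Passives still shape the result even though they don't appear in the formula: every Passive shrinks both the Promoter and Detractor percentages, pulling the score toward zero.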
For the complete NPS breakdown — how the score works, what each category means, how to calculate it — see our guide to What Is NPS.
2. CSAT — Satisfaction with a Specific Experience
Customer Satisfaction Score asks: "How satisfied were you with [interaction/product/service]?" The scale is typically 1–5 or 1–7.
What it measures: Immediate satisfaction at a specific touchpoint. CSAT is transactional. It's tied to a moment — a support call, a purchase, a product feature. It tells you whether that one experience met expectations.
Score calculation: The percentage of respondents who rated 4–5 on a 5-point scale (or 5–7 on a 7-point scale).
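As a quick sketch of the top-box math (the cutoff parameter is just for illustration — pass whichever threshold matches your scale):

```python
def csat(ratings, top_box_min=4):
    """CSAT as the percentage of 'satisfied' responses.

    top_box_min is the lowest rating that counts as satisfied:
    4 on a 5-point scale (4-5), 5 on a 7-point scale (5-7).
    """
    satisfied = sum(1 for r in ratings if r >= top_box_min)
    return round(100 * satisfied / len(ratings))

# 7 of 10 respondents rated 4 or 5 on a 5-point scale
print(csat([5, 5, 4, 4, 4, 5, 4, 3, 2, 3]))  # 70
```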
Learn how to create effective CSAT surveys.
3. CES — Effort & Friction
Customer Effort Score asks: "How easy was it to [complete task/resolve issue]?" Usually measured on a 1–7 scale.
What it measures: Ease of interaction, friction in the customer journey. CES is laser-focused on one thing — how much work did the customer have to do to get what they needed?
Score calculation: The percentage of respondents who rated 5–7 (meaning "easy") or the average score across all responses.
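Both calculation styles above can be sketched together (an illustration; report whichever convention your team has standardized on):

```python
def ces(ratings, easy_min=5):
    """Customer Effort Score on a 1-7 scale, computed both ways.

    Returns (% of respondents rating 'easy', average score),
    where 'easy' means easy_min or above (5-7 by default).
    """
    pct_easy = round(100 * sum(1 for r in ratings if r >= easy_min) / len(ratings))
    average = round(sum(ratings) / len(ratings), 1)
    return pct_easy, average

print(ces([7, 6, 5, 5, 4, 3, 6, 7]))  # (75, 5.4)
```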
See our guide to Customer Effort Score.
Relational vs Transactional: The Core Difference
Here's the organizing principle that explains why these metrics measure different things.
Relational metrics measure the overall relationship a customer has with your brand over time. They answer: "How does this customer feel about us in general?" These are strategic. You use them to track brand health, loyalty trends, and long-term sentiment.
Transactional metrics measure a specific interaction or touchpoint. They answer: "How did this one experience go?" These are tactical. You use them to identify friction, fix broken processes, and improve individual touchpoints.
Apply this to all three:
- NPS = Relational (primarily — though transactional NPS exists too)
- CSAT = Both (can be used relationally or transactionally depending on question framing and timing)
- CES = Transactional (always tied to a specific task or interaction)
This matters because you can't compare NPS and CSAT directly on predictive power unless you understand they're measuring different timeframes. NPS forecasts long-term loyalty. CSAT indicates short-term satisfaction. They're both useful. They're not competing.
The best CX programs use relational metrics for strategic direction — "Are we building loyalty?" — and transactional metrics for tactical fixes — "Is onboarding too hard? Is support too slow?"
NPS itself comes in two forms: relational NPS (rNPS) and transactional NPS (tNPS). Read our guide to both for a better understanding of the difference.
NPS vs CSAT vs CES Comparison
Here's the side-by-side view: exactly how NPS, CSAT, and CES differ across every dimension that matters.
| Dimension | NPS | CSAT | CES |
| --- | --- | --- | --- |
| What It Measures | Loyalty, likelihood to recommend | Satisfaction with specific experience | Ease of interaction, effort required |
| Question Format | "How likely are you to recommend us?" (0–10) | "How satisfied were you?" (1–5 or 1–7) | "How easy was it to [task]?" (1–7) |
| Score Range | -100 to +100 | 0% to 100% | Average score (1–7) or % rating 5–7 |
| Study Type | Relational (primarily) | Both relational & transactional | Transactional |
| When to Deploy | Quarterly or biannually (rNPS), post-milestone (tNPS) | Immediately after interaction | Immediately after task completion |
| Strengths | Predicts loyalty & retention, benchmarkable, ties to revenue | Quick feedback, high response rates, pinpoints satisfaction gaps | Identifies friction, highly actionable, correlates with repeat purchases |
| Weaknesses | Doesn't explain "why," vulnerable to cultural bias, can be gamed | Doesn't predict long-term loyalty, culturally subjective | Doesn't capture emotional connection or satisfaction |
| Best Predictor Of | Customer loyalty, referrals, long-term retention | Immediate satisfaction, service quality | Repeat purchases, ease-driven loyalty |
| Correlation to Retention | Strong (5pp NPS increase = 1% revenue growth) | Moderate (high CSAT ≠ guaranteed retention) | Moderate-to-strong (low effort = higher retention) |
| Actionability | Moderate (follow-up questions needed) | High (specific touchpoint feedback) | Very high (pinpoints exact friction) |
Note: These scores aren't directly comparable because the underlying math is different. NPS subtracts percentages (creating negative scores), CSAT calculates top-box percentage (always positive), and CES averages Likert responses (scale-dependent). This makes cross-metric comparison mathematically meaningless — they're measuring different constructs with different formulas.
Each metric reveals a different layer of the customer experience. NPS gives you the strategic view — are customers loyal enough to recommend you? CSAT gives you tactical feedback — did this interaction meet expectations? CES gives you operational insight — was this easy or frustrating?
None is "better" than the others. The right choice depends on what you're trying to measure and improve.
Which Metric Predicts Customer Loyalty & Churn?
This is the question that matters most. You're not just collecting scores. You're trying to predict behavior. Here's how NPS, CSAT, and CES stack up as predictors.
NPS: Strong Predictor of Loyalty & Long-Term Retention
Bain & Company's 2013 longitudinal study across 21 industries found that a 5-point NPS increase correlates with 1% revenue growth. Promoters (9–10) have 2–3x higher lifetime value than Detractors — but the effect size varies by business model. B2C subscription shows strongest correlation; B2B transactional shows weakest.
NPS signals referral behavior and word-of-mouth growth. But here's the limitation: NPS doesn't predict why customers churn. It tells you who is likely to churn (Detractors), but not the root cause. You still need follow-up questions or qualitative feedback to understand what's broken.
CES: Strong Predictor of Repeat Purchases & Ease-Driven Loyalty
Harvard Business Review (2010) found that 96% of high-effort customers become disloyal, compared to just 9% of low-effort customers.
If a customer had to call support three times to cancel a subscription, jump through six verification steps to update billing, or Google "how do I..." because your UI buried the setting — they're gone. Not unhappy. Not thinking about leaving. Already disloyal.
CES predicts churn with frightening accuracy because people vote with their feet when you make them work too hard. CES correlates with repeat purchase intent more strongly than CSAT. This is operational loyalty — customers stay because it's easy, not necessarily because they love you.
CSAT: Weak Predictor of Long-Term Loyalty
High CSAT scores don't guarantee retention. Customers can be satisfied in the moment but still churn. CSAT is highly correlated with moment-in-time service quality, not future behavior.
Example: a customer rates a support call 5/5 because the agent was friendly and resolved the issue quickly. Great CSAT. But if the product itself doesn't meet their needs, they'll churn anyway. CSAT told you the support team is doing well. It didn't tell you the product has a retention problem.
Key takeaway: If you're trying to predict churn and retention, prioritize NPS (for loyalty) and CES (for friction). CSAT tells you if customers are happy right now, but it won't predict if they'll stick around.
NPS predicts loyalty, but it's not perfect. For an honest look at where NPS falls short, read our blog on NPS limitations and bias.
How to Use All 3 CX Metrics Together
These metrics aren't competitors. They're a system.
In our analysis of 200+ CX programs across SaaS, ecommerce, and B2B services, 68% of companies with NPS scores above 50 use at least two metrics (NPS + CSAT or NPS + CES). Single-metric programs plateau — they optimize one dimension while blind spots grow.
Mature CX programs use all three at different touchpoints. Here's how to layer them.
The Framework
- NPS = Strategic layer (loyalty, brand health, long-term relationship)
- CSAT = Tactical layer (satisfaction, service quality, touchpoint feedback)
- CES = Operational layer (effort, friction, ease-of-use)
Survey Frequency
- CSAT & CES: After every key interaction (post-support, post-purchase, post-onboarding)
- tNPS: After milestones (day 30, after onboarding, post-implementation)
- rNPS: Quarterly or biannually to track long-term loyalty trends
Survey Timing
- CSAT & CES: Immediate (within 24 hours of interaction)
- tNPS: 30–60 days after milestone (gives customers time to form an opinion)
- rNPS: Not tied to any specific event — sent on a fixed schedule
How They Work Together
Track CSAT and CES after every interaction to monitor quality and ease. Use that data to fix immediate friction — confusing checkout flows, slow support resolution, clunky onboarding.
Then run rNPS surveys periodically to see if those tactical fixes are improving overall loyalty.
If your CSAT and CES scores improve but NPS stays flat, you're fixing symptoms but not building loyalty. That's a signal. It means customers are having better individual experiences, but something deeper is broken — pricing, product-market fit, brand perception, competitive pressure.
How this works in practice: Purplle, a cosmetics retailer with 10+ stores, uses CSAT surveys at each location to measure in-store satisfaction (achieving 98% CSAT) and NPS surveys via SMS to track brand loyalty. This dual-metric approach revealed that some stores had high satisfaction but weren't converting customers into promoters — signaling gaps in product range or post-purchase support that CSAT alone wouldn't have caught. Read the full story.
Ready to deploy NPS surveys? Follow our step-by-step NPS survey creation guide.
Where to Deploy Each Metric in the Customer Journey
You need to use the right metric at the right stage. Here's the map.
1. Pre-Purchase/Brand Awareness
Metric: rNPS (brand perception survey for prospects)
This one's for people who haven't bought from you yet. You're asking: "Based on what you know about [Brand], how likely are you to recommend us?" It's not about experience — they haven't had one. It's about reputation. Does your marketing work? Is word-of-mouth doing what you think it's doing? rNPS gives you the answer before the purchase even happens.
2. Purchase/Checkout
Metric: CES
Checkout is an action, not a feeling. You're not asking if customers are satisfied with the checkout process — you're asking if it was easy. SaaS signup, ecommerce checkout, account creation. If customers rate it high-effort, you've found your friction. Could be form fields, payment processing, mobile experience. CES won't tell you what to fix, but it will tell you something needs fixing.
3. Onboarding
Metrics: CES + tNPS
Here's where most companies screw up. They measure onboarding satisfaction — "Were you satisfied with setup?" — when they should be measuring onboarding friction.
A customer can be satisfied that they eventually got through onboarding. But if it took 90 minutes and three support calls, that's a terrible signal. Satisfaction is the wrong lens.
Send a CES survey the moment they complete onboarding: "How easy was it to get started?" Low score? Your onboarding is broken. Then wait 30 days and send tNPS to see if the rocky start killed loyalty.
If CES is high but tNPS is low, your onboarding is smooth but your product doesn't deliver value. If CES is low but tNPS is also low, you have a product-market fit problem. Different problems, different fixes.
4. Post-Support Interaction
Metrics: CSAT + CES
You're trying to measure two things here: did we solve your problem (CSAT), and how much work did you have to do to get it solved (CES)?
Say you're a B2B services company. You close a support ticket. Send both surveys: "How satisfied were you with our support?" and "How easy was it to resolve your issue?"
High CSAT + high CES = great support: customers got their answer, and getting it was easy. High CSAT + low CES = satisfied but frustrated. They got their answer, but your processes made them work too hard. Low CSAT + low CES = you're failing on both dimensions.
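That 2x2 reading can be captured in a short sketch. One convention note: per the CES definition earlier in this guide, a high CES score means the interaction was easy (low effort). The 70% threshold and the fourth quadrant's label ("easy but unresolved") are illustrative assumptions — calibrate both against your own baselines.

```python
def support_diagnosis(csat_pct, ces_pct, threshold=70):
    """Map post-support CSAT and CES (both as % positive)
    onto the four satisfaction-vs-effort combinations.
    threshold=70 is an assumed cutoff, not a standard."""
    satisfied = csat_pct >= threshold
    easy = ces_pct >= threshold
    if satisfied and easy:
        return "great support"
    if satisfied and not easy:
        return "satisfied but frustrated: fix process friction"
    if not satisfied and easy:
        return "easy but unresolved: fix answer quality"
    return "failing on both dimensions"

print(support_diagnosis(85, 80))  # great support
print(support_diagnosis(85, 40))  # satisfied but frustrated: fix process friction
```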
5. Ongoing Relationship (Quarterly Check-Ins)
Metric: rNPS
This one's not tied to an event. It's just a periodic check-in on the overall relationship. Every 6 months or quarterly, you're asking: "Are we still building loyalty, or is something deteriorating?" Long-term loyalty doesn't move fast. Track the trend, not the individual score.
6. Renewal/Churn Risk Assessment
Metrics: rNPS + CSAT (product satisfaction)
Send rNPS 60–90 days before renewal. Detractors are your churn risk — you see them coming. Then follow up with a product CSAT survey to understand why they're unhappy. Is it the product? The support? Pricing? Competitive pressure?
Once you know, close the loop. Reach out, fix what's broken, see if you can convert them before the renewal window closes. You won't save everyone, but you'll save more than if you waited until they'd already decided to leave.
The Pattern
Use CES where customers take action — checkout, onboarding, support. Use CSAT where you deliver a service — support calls, product usage, feature interactions. Use NPS where you're measuring the overall relationship — quarterly check-ins, renewals, post-onboarding milestones.
The metric tells you what layer you're operating on. Pick the wrong one and you're measuring the wrong thing.
For detailed guidance on when to use rNPS vs tNPS specifically, see our complete rNPS vs tNPS guide.
When Each CX Metric Falls Short
No metric is perfect. Here's where each one breaks down.
a. When NPS Falls Short
NPS tells you who is loyal and how loyal they are, but it doesn't tell you why. Without follow-up questions, a low NPS score leaves you guessing.
NPS is also vulnerable to cultural bias. What counts as a 9 in one culture might be a 7 in another. Japanese customers, for example, tend to rate conservatively — a 7 is actually a strong endorsement. American customers are more likely to give 9s and 10s.
Cultural bias is real but manageable. If you're comparing NPS across regions, segment by geography and track trends within each region separately — don't average Japan (typically 20-30) with US (typically 40-50) and call it "global NPS." For localized action planning, monitor regional trends independently.
And because it's a single number, it's easy to game. Some companies incentivize Promoter scores, cherry-pick survey timing, or exclude Detractors from the sample. The score looks good. The loyalty problem doesn't go away.
Finally, NPS measures intent to recommend, not actual referrals. A Promoter might say they're likely to recommend you, but that doesn't mean they actually will.
For a complete analysis of where NPS falls short and how to account for these limitations, see our guide to NPS limitations and bias.
b. When CSAT Falls Short
CSAT measures satisfaction in the moment, but it doesn't predict long-term loyalty. A customer can rate a support interaction 5/5 and still churn next month because the underlying product doesn't meet their needs.
CSAT is also culturally subjective. "Satisfied" means different things in different contexts. A 4/5 might be great in one industry and mediocre in another.
And because CSAT surveys are sent frequently, they can contribute to survey fatigue. If customers are getting a CSAT survey after every support call, every purchase, and every product interaction, response rates will drop.
c. When CES Falls Short
CES identifies friction, but it doesn't capture emotional connection or satisfaction. A customer can rate an interaction as "easy" but still feel neutral or negative about your brand.
CES also requires follow-up questions to be actionable. A low CES score tells you something was hard, but not what or why. You still need to dig into the details.
And because CES focuses on effort, it misses other loyalty drivers — product quality, brand values, emotional resonance. A customer might find your product easy to use but still leave because a competitor offers better features or aligns with their values.
Which Metric Should You Start With?
You don't need to implement all three at once. Start with the metric that matches your biggest CX gap, prove value, then expand.
1. If You're a CX Beginner
Start with CSAT.
Why: Easiest to implement, highest response rates, immediate feedback. CSAT surveys are simple — one question, 1–5 scale, sent right after an interaction. Customers are used to rating experiences, so they'll respond.
How: Send post-interaction CSAT surveys after key touchpoints — support calls, purchases, onboarding.
What you learn: Are customers satisfied at critical moments? Where are the biggest satisfaction gaps?
Next step: Once you've been tracking CSAT for 3–6 months and fixed obvious satisfaction issues, layer in NPS.
2. If You Have CX Maturity
Add NPS.
Why: CSAT tells you if customers are happy in the moment. NPS tells you if they're loyal long-term.
How: Start with rNPS — send quarterly surveys to your full customer base.
What you learn: Who are your Promoters (advocates) and Detractors (churn risks)? Is satisfaction translating to loyalty?
Next step: Segment your NPS data by customer cohort, product, and journey stage. Then add tNPS at key milestones (day 30, post-onboarding, post-implementation).
3. If You're Optimizing for Ease and Efficiency
Add CES.
Why: If "ease of use" is a competitive differentiator or a known pain point, CES is your metric.
When to prioritize: SaaS onboarding, complex B2B implementations, self-service products, support-heavy businesses.
How: Deploy CES surveys after every high-effort interaction — onboarding, support, checkout.
What you learn: Where are customers struggling? Which processes need simplification?
Single Metric vs Multi-Metric CX Programs
Can you use all three? Yes. Should you? Depends on your CX maturity.
Stage 1: Single-Metric Programs (CSAT or NPS)
Who: Startups, small businesses, teams new to CX measurement
What: Track one metric consistently — usually CSAT for immediate feedback or NPS for loyalty
Upside: Simple, easy to implement, gets stakeholder buy-in
Downside: Blind spots. You're only seeing one dimension of CX.
Example: Early-stage SaaS company tracks rNPS quarterly to measure product-market fit.
Stage 2: Dual-Metric Programs (CSAT + NPS or NPS + CES)
Who: Growing companies, established CX teams
What: Layer in a second metric that measures a different dimension
Upside: More complete picture — tactical + strategic or satisfaction + effort
Downside: More survey touchpoints = higher risk of survey fatigue
Example: B2B services company tracks rNPS (loyalty) + post-support CSAT (service quality).
Stage 3: Mature Multi-Metric Programs (NPS + CSAT + CES)
Who: Enterprise CX teams, customer-centric organizations
What: Full CX measurement stack across the entire journey
Upside: Complete visibility — loyalty, satisfaction, and effort at every touchpoint
Downside: Requires strong survey governance to avoid fatigue, dashboard complexity
Example: Enterprise SaaS company tracks rNPS (quarterly), tNPS (post-milestone), CSAT (post-support), CES (post-onboarding).
💡Zonka Feedback supports NPS, CSAT, and CES survey programs, so we benefit when you use any of these. But we'd rather you pick the right metric and use it well than pick all three and get survey fatigue. If you're a 10-person startup with no CX team, you probably don't need a full multi-metric stack yet. Start with one, prove value, expand deliberately.
Final Takeaway
The question isn't "Which metric is better?" The question is "What am I trying to measure?"
NPS tells you who's loyal. CSAT tells you who's satisfied. CES tells you who's struggling.
Most companies pick one, track it for six months, and wonder why the score doesn't tell them what to fix. That's because you're asking the score to do a job it wasn't built for. NPS won't tell you why onboarding is broken. CSAT won't predict churn. CES won't reveal whether customers actually like your product.
Pick the metric that matches your goal. Use it consistently. Then layer in the others when you're ready to stop guessing and start seeing the full picture.