TL;DR
- The B2B SaaS NPS average is 30–36, but the only comparison that matters is your direct subcategory — an NPS of 40 could mean you're outperforming or quietly falling behind, depending on your peer set.
- A CSAT below 70% is a competitive disadvantage for B2B SaaS; the benchmark to aim for is the high 70s.
- SMS surveys lead on response rates (40–50%), followed by in-app (20–35%) and linked email (10–18%). Channel choice moves your numbers more than survey design does.
- The metric most product teams aren't tracking: the feedback-to-feature ratio. NPS and CSAT measure how customers feel. This one measures whether that's actually changing what you build.
Your NPS is 38.
Is that good?
Most product teams don't actually know. They track the number, watch it move quarter to quarter, compare it against last year. But they have no idea whether 38 means they're leading their category or quietly falling behind every direct competitor. That's the benchmark problem in a sentence: a number without a reference point isn't a metric. It's just data.
Benchmarks matter for two reasons. First, diagnosis: you can't tell if your feedback program is working without knowing what "working" looks like in your space. Second, target-setting: vague goals like "improve NPS" are useless without a number to aim at. A SaaS company with NPS 35 could be right at the industry median or significantly underperforming their subcategory. Without a benchmark, that number is almost meaningless.
This article covers 2026 benchmarks for NPS, CSAT, and survey response rates, organized by metric, with industry breakdowns and a practical framework for interpreting your own numbers against them.
Why Product Feedback Benchmarks Are Harder to Read Than They Look
Before you apply any of the numbers below, two problems are worth naming directly.
The first is methodology. A product team measuring NPS via in-app surveys will get different scores than one using email. Not because the product is better or worse, but because the channel, the timing, and the friction level change who responds and how. Enthusiastic users are more likely to complete a well-placed in-app survey. Frustrated users are more likely to click through a follow-up email. Scores from different channels aren't directly comparable. If you switched from email to in-app NPS this year, your historical benchmark has essentially reset.
The second problem is aggregation. "SaaS NPS average is 36" sounds specific until you realize SaaS includes CRM software, analytics tools, project management platforms, and developer tooling, spanning a 20-point range within the same vertical. Sybill's 2026 NPS benchmark research makes the point clearly: the useful comparison is against direct subcategory competitors, not the industry at large.
The right benchmark isn't the vertical average. It's your peers in your specific product category.
So use what follows as orientation, not verdict. Significantly below industry average? That's worth investigating. At or above? Still find your subcategory average before declaring success.
NPS Benchmarks by Industry (2026)
NPS (Net Promoter Score) has been the dominant product loyalty metric for over two decades, and the 2026 benchmark data shows a familiar pattern: wide variance across industries and even wider variance within them.
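As a refresher on what these benchmark figures aggregate, here's a minimal sketch of the standard NPS calculation: the percentage of promoters (scores of 9–10) minus the percentage of detractors (0–6). The response batch is hypothetical:

```python
def nps(scores: list[int]) -> float:
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    if not scores:
        raise ValueError("No responses to score")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Hypothetical batch of 0-10 survey responses
responses = [10, 9, 9, 8, 7, 10, 6, 5, 9, 8]
print(round(nps(responses)))  # 30: five promoters, two detractors, ten responses
```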
Here's the broad industry view:
| Industry | Median NPS | Notes |
| --- | --- | --- |
| Manufacturing | ~65 | Consistently highest; tangible products, simpler interaction patterns |
| Technology / Professional Services | 60–66 | Strong relationship metrics; based on Retently 2025 data |
| Healthcare | 53–80 | Wide range depending on segment and measurement methodology |
| Consulting | ~51 | Relationship-based NPS tends to run high |
| Retail | ~50 | Consistent; driven by transactional simplicity |
| Banking / Hospitality | 41–44 | Complex service relationships pull scores down |
| B2B SaaS | 30–41 | Wide range; enterprise software sits at the higher end |
| Insurance | 23–80 | The widest variance of any industry tracked |
Sources: Lorikeet CX 2026, CustomerGauge SaaS NPS Benchmarks
For product teams, the B2B SaaS row is the one that matters. And the 30–41 range is deceptively broad, so here's what it actually breaks down to:
| NPS | Tier |
| --- | --- |
| 30–36 | Industry Average |
| 40+ | Above Average |
| 50+ | Top Tier |
| 70+ | World-Class |
Enterprise software skews higher (around 44) because longer contracts and deeper vendor relationships produce more loyal respondents. Pure-play SaaS targeting self-serve or SMB buyers tends to land closer to 30–36. That's not a product quality difference. It's a relationship depth difference.
B2B vs. B2C: Why the Gap Is Structural
According to Lorikeet's 2026 benchmarks, B2C companies average NPS of 49 vs. 38 for B2B. The gap exists for a structural reason. B2C interactions are shorter, more transactional, and simpler to resolve. B2B involves longer sales cycles, multiple stakeholders, complex onboarding, and support needs that drag scores down even when the product is genuinely strong. A SaaS tool with NPS 38 might be performing exactly as expected for its category and buyer type. For a fuller breakdown of how product feedback differs from broader customer feedback metrics, see product feedback vs. customer feedback.
But here's the nuance that matters most: a B2B SaaS NPS of 40 is above industry average. It's also potentially below your direct peer group if CRM tools in your price tier average 50. Same score, completely different verdict. This is why subcategory benchmarking isn't optional — it's the only comparison worth making. For teams using NPS alongside product-market fit measurement, the product-market fit survey guide covers how the 40% PMF threshold relates to your loyalty benchmarks.
CSAT Benchmarks for Product Teams (2026)
CSAT (Customer Satisfaction Score) measures something fundamentally different from NPS, and keeping that distinction clean is what makes benchmarking it useful.
NPS measures relationship health. Send it quarterly or semi-annually, at relationship moments: after onboarding, before renewal, at the six-month mark. CSAT measures whether a specific interaction went well: a support case, an onboarding call, a product update. Send it immediately after the event while the experience is still fresh.
For B2B SaaS, the CSAT benchmark to know: the high 70s. Industry data consistently puts 70% as the competitive floor. Below that, you're at a measurable disadvantage against peers. At 80%+, you're in top-quartile territory for the category.
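For reference, CSAT is conventionally reported as the share of respondents choosing the top two boxes on a 5-point satisfaction scale. A minimal sketch, with hypothetical post-support ratings:

```python
def csat(ratings: list[int]) -> float:
    """CSAT: percentage of respondents rating 4 or 5 on a 5-point scale."""
    if not ratings:
        raise ValueError("No responses to score")
    satisfied = sum(1 for r in ratings if r >= 4)
    return 100 * satisfied / len(ratings)

# Hypothetical post-support ratings
ratings = [5, 4, 4, 3, 5, 2, 4, 5, 4, 3]
print(csat(ratings))  # 70.0 -- right at the competitive floor
```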
A few things worth tracking alongside that number:
CSAT and NPS don't always move together. A company can have high NPS and weak CSAT in support. Customers love the product but find the help desk painful. If you're only watching NPS, that gap stays invisible until it starts showing up in churn data.
CES is consistently underused. Customer Effort Score ("how easy was it to resolve your issue?") is measured on a 1–7 scale. The cross-industry average sits around 5 out of 7, with survey response rates typically in the 25–30% range, closer to CSAT benchmarks than most teams expect. Standardized industry breakdowns don't exist the way they do for NPS, but CES consistently proves a stronger predictor of churn than CSAT. Teams that track it catch friction earlier than those relying on satisfaction scores alone. If you're only tracking CSAT on support interactions, you're missing the metric with the strongest retention signal.
Running both metrics together outperforms either alone. SurveySparrow found that programs combining NPS and CSAT see 44% average response rates vs. 10–15% for single-metric programs. You get better coverage and triangulation. A 4/5 CSAT score with a frustrated open-text comment means something different from a 4/5 with a neutral one. The NPS data helps you figure out which customers to watch most closely.
Combined Program Benchmark
If your current program is email-only and NPS-only, the 44% vs. 10–15% gap above is worth acting on.
For a broader view on what CSAT, NPS, and CES measure together, see the product feedback metrics overview.
Survey Response Rate Benchmarks by Channel
Response rates vary more by channel than by any other variable in your feedback program. Most teams don't realize how large the gap is until they see it in a table, and it's the gap most benchmark guides skip entirely.
Your NPS score is only as reliable as your response rate. A 5% response rate on 100 survey sends isn't a real NPS. It's the opinion of five people, likely skewed toward your most extreme detractors or most enthusiastic promoters. Statistical validity requires enough responses to actually represent your customer distribution.
Here are the 2026 response rate benchmarks by channel:
| Channel | Typical Response Rate | Notes |
| --- | --- | --- |
| SMS surveys | 40–50% | Highest rates; contextual, frictionless, no click-through |
| In-app surveys | 20–35% | Higher for PLG products; lower for complex B2B |
| Email (embedded rating) | 15–25% | Embedded buttons meaningfully outperform linked surveys |
| Email (linked survey) | 10–18% | Most common delivery method; usually the worst-performing one |
| Website surveys | 8–15% | High friction, low context, lowest rates across all channels |
Source: Zonka Feedback NPS Response Rate Research
Note: B2B SaaS email response rates trend higher (18–25%) than the generic linked email benchmark, due to engaged user bases and product-led survey triggers. If you're a B2B SaaS product sending contextual email surveys, the 10–18% floor is conservative and 18–25% is a more accurate peer comparison.
The in-app advantage is real, and most teams still haven't acted on it. Companies moving from email to in-app NPS see 2x–10x improvement in response rates, according to Pendo research. The mechanism is straightforward: users are already in the product, the survey is contextually relevant, and there's zero click-through friction to drop off at. Email requires a customer to notice the email, open it, click through, and then respond. In-app asks for one tap. The conversion gap isn't surprising once you see it that way.
PLG vs. Sales-Led: Your Model Changes Your Benchmarks
PLG companies see 30–40% in-app NPS completion rates vs. 10–15% for email-based programs. Sales-led companies often see stronger per-survey participation even with lower raw volume, because their customers are more deeply invested in the relationship. The practical implication: match your collection channel to how customers actually use your product and how they expect to hear from you.
For a deeper look at response rate drivers by channel, the Zonka NPS response rate guide covers the specifics. What this article adds is the cross-channel benchmark view, so you can see how your current rate compares before deciding whether a channel change is worth it. If you're still evaluating which collection methods fit your program, the ways to collect product feedback guide covers the full range of options beyond surveys.
What's Actually Driving Low Response Rates
Four culprits show up repeatedly in programs that underperform their channel benchmarks:
- Survey length over three questions for transactional sends. Drop-off accelerates fast after the third question in a post-event survey. For relationship NPS, you can run longer, but post-support CSAT should be one or two questions, not five.
- Bad timing. Triggering a survey on login instead of after a user completes a meaningful action in the product. The survey needs to be anchored to something the customer just did, not to the act of showing up (see the sketch after this list).
- Generic, context-free invitations. No personalization, no reference to the interaction that triggered it, no indication that anyone is actually reading the responses. Customers respond more when the survey feels connected to something real.
- Channel mismatch. Running email-first surveys to a mobile-heavy user base. The friction compounds when the channel doesn't fit the user's behavior pattern.
If your response rates are below the channel benchmarks, work through this list before assuming your customers don't want to give feedback. Most of the time, they do. The friction is just in the way.
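To make the timing point concrete, here's a minimal sketch of event-anchored survey triggering. The event names, the 90-day cooldown, and the should_survey helper are all illustrative assumptions, not a prescribed implementation:

```python
from datetime import datetime, timedelta

# Hypothetical "meaningful action" events worth anchoring a survey to.
# Login is deliberately excluded.
SURVEY_TRIGGERS = {"project_created", "report_exported", "support_case_closed"}
COOLDOWN = timedelta(days=90)  # assumed per-user survey frequency cap

def should_survey(event: str, last_surveyed: datetime | None) -> bool:
    """Trigger only after a meaningful completed action, never on login,
    and never more often than the cooldown allows."""
    if event not in SURVEY_TRIGGERS:
        return False  # e.g. "login" never qualifies
    if last_surveyed and datetime.utcnow() - last_surveyed < COOLDOWN:
        return False
    return True

print(should_survey("login", None))            # False: not a meaningful action
print(should_survey("report_exported", None))  # True: anchored to a completed action
```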
The Metric Most Product Teams Ignore: Feedback-to-Feature Ratio
NPS tells you if customers would recommend you. CSAT tells you if specific interactions went well. CES tells you how much effort customers had to spend getting help.
None of them tell you whether any of it is actually changing what you build.
The feedback-to-feature ratio fills that gap. The concept is simple: how many pieces of customer feedback does it typically take before your product team justifies building or changing a feature?
A team with a low ratio is genuinely customer-driven. Feedback flows in, patterns surface fast, roadmap decisions follow. A team with a very high ratio collects data without acting on it — the feedback lives in a spreadsheet, gets reviewed at quarterly planning, and mostly gets deprioritized in favor of what the product org already wanted to build anyway. A team with a ratio of zero is guessing entirely.
How to Calculate Your Feedback-to-Feature Ratio
Over a quarter, track the number of unique feedback instances (feature requests, complaints, support patterns, usability flags) that preceded each roadmap decision. After two or three quarters, you have a stable numerator. The total roadmap decisions made over the same period is your denominator.
Ratio = Feedback instances that drove decisions ÷ Total roadmap decisions made
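If you want to operationalize that, a minimal sketch of the bookkeeping follows. The decision-log structure and the feature names are hypothetical:

```python
# Hypothetical quarter of roadmap decisions, each tagged with the count of
# unique feedback instances (requests, complaints, support patterns) behind it.
decisions = [
    {"feature": "bulk export", "feedback_instances": 14},
    {"feature": "sso", "feedback_instances": 9},
    {"feature": "dark mode", "feedback_instances": 0},  # built on instinct
]

feedback_driven = sum(d["feedback_instances"] for d in decisions)
ratio = feedback_driven / len(decisions)
print(f"Feedback-to-feature ratio: {ratio:.1f}")  # 23 instances / 3 decisions = 7.7
```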
A lower, more deliberate ratio means feedback is genuinely flowing into product decisions. A ratio collapsing to zero means no feedback is driving decisions at all. A ratio nobody can calculate means the pipeline doesn't exist yet.
The number you want to trend toward is lower and more deliberate. Not 1:1, though: that would mean building every feature a single customer requests, which is reactive chaos. What you want is a ratio that reflects a real feedback-to-decision pipeline: feedback comes in, patterns emerge, relevant signals reach the product team, decisions follow. Not feedback sitting in a form nobody's reading.
Here's the distinction that matters for benchmarking: NPS and CSAT measure how customers feel right now. The feedback-to-feature ratio measures whether those feelings are changing what you build next. The second number is what actually determines whether your feedback program is producing anything beyond reports.
How to Interpret Your Benchmarks (Framework)
Before concluding whether your numbers are good or bad, run them through four questions.
1. Are you comparing against the right peer set?
SaaS is too broad a category to benchmark against. If you sell project management software to mid-market teams, your real benchmark is project management tools at your price tier — not the SaaS vertical average. Finding that number takes more research than looking up a CustomerGauge table, but it's the only comparison that actually means anything. An NPS of 40 can be excellent or mediocre depending entirely on who you're measuring against.
2. Are your response rates high enough to trust the scores?
Before you act on a benchmark number, check whether you have enough responses to trust it. A 5% response rate on 100 sends means 5 people answered. If those happen to be your most frustrated users, or your most loyal promoters, your NPS is dramatically skewed from your real number. Statistical validity isn't optional; it's the precondition for any benchmark comparison to be meaningful.
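One way to sanity-check a score before benchmarking it: estimate the 95% margin of error from the observed promoter/detractor mix. This sketch assumes simple random sampling, which real survey responses violate through self-selection, so treat the result as a floor on your true uncertainty:

```python
import math

def nps_margin_of_error(promoters: int, passives: int, detractors: int) -> float:
    """Approximate 95% margin of error for an NPS, treating each response as
    +100 (promoter), 0 (passive), or -100 (detractor)."""
    n = promoters + passives + detractors
    p, d = promoters / n, detractors / n
    mean = 100 * (p - d)
    variance = 100**2 * (p + d) - mean**2
    return 1.96 * math.sqrt(variance / n)

# Five responses (a 5% response rate on 100 sends): the interval is enormous
print(round(nps_margin_of_error(3, 1, 1), 1))      # ~70.1 points either way
# The same mix at 200 responses: far tighter
print(round(nps_margin_of_error(120, 40, 40), 1))  # ~11.1 points either way
```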
3. Are you comparing the same channel across time periods?
Email NPS and in-app NPS are different datasets even when they're measuring the same question. If you switched collection channels this year, your historical trend line has essentially reset. Plotting pre-switch email data next to post-switch in-app data and drawing conclusions from the movement is a category error. The channel change alone explains most of the score difference.
4. What's the trend, not just the number?
A B2B SaaS NPS of 35 that moved from 28 over six months is a stronger signal than a static 42. Improvement velocity tells you whether the underlying product experience is getting better, which is what customers ultimately respond to. Absolute position matters. But direction matters more, because direction tells you whether the operational changes you're making are actually working.
For more on building the measurement system that makes these comparisons consistent over time, see the product feedback strategy guide. If you need a ready-to-use starting point, the product feedback form template gives you a structured survey you can deploy immediately.
What Your Benchmarks Are Actually Telling You
A benchmark tells you where to look. The feedback behind it tells you what to fix.
The product teams that move their NPS by 15–25 points over 18 months don't get there by running better surveys. They get there by fixing the things the surveys point at: onboarding friction, support resolution speed, a specific workflow that's breaking at scale for a particular user segment. The survey captures the signal. The improvement comes from the operational change that follows.
This is why the interpretation framework in the previous section matters more than the benchmarks themselves. Not because the numbers are meaningless (they're not), but because a benchmark position is a starting question, not a final answer. A below-average NPS with a clear improvement trend and a genuine feedback-to-action pipeline is more valuable than an above-average NPS that nobody is acting on.
Conclusion
Benchmarks are only useful if you're honest about what they can and can't tell you. A number without the right peer comparison is noise. A strong score with a low response rate is statistically unreliable. And even a well-measured, above-average NPS means nothing if the feedback behind it isn't changing what your team builds or fixes.
The teams that get real value from product feedback benchmarks treat them as diagnostic inputs, not report card scores. They use benchmarks to ask better questions: why is our CSAT high but our CES weak? Why did our response rate drop when we switched channels? Why hasn't our NPS moved despite two quarters of product improvements? The benchmark surfaces the question. The work is in finding the answer.
For more on building the program that turns these numbers into decisions, the product feedback guide covers the full architecture, and the product feedback strategy guide covers how to structure measurement end-to-end.