TL;DR
- Net Promoter Score (NPS) measures customer loyalty with one question: "How likely are you to recommend us?" on a 0-10 scale
- Respondents split into three groups: Promoters (9-10), Passives (7-8), and Detractors (0-6)
- The formula: NPS = % Promoters − % Detractors, producing a score from -100 to +100
- NPS is benchmarkable and operationally simple, and Bain & Company's research found that industry NPS leaders grow at more than twice the rate of competitors
- The score tells you whether customers are loyal. The open-ended follow-up tells you why. Neither is useful without the other
- NPS doesn't replace other customer experience metrics. Pair it with CSAT or CES for a complete picture
Satisfied customers leave all the time. They fill out your survey, give you a 7, and two weeks later they're trying out a competitor. The support ticket got resolved. The order showed up on time. Everything was fine.
And "fine" is exactly the problem. It doesn't predict whether someone stays. It doesn't predict whether they mention you to a colleague. It definitely doesn't predict whether they'll pick you again when a better offer shows up in their inbox next Tuesday.
That gap between "satisfied" and "loyal" is what Net Promoter Score was designed to catch. It's a single-question metric that two-thirds of Fortune 1000 companies now use, and it's been around since 2003. Not because it's flawless. Because it gives teams a standardized, benchmarkable signal of customer loyalty that's simple enough to actually get used. And that last part matters more than most people think.
This guide covers what NPS is, how the formula works, what the score categories actually mean, where the metric falls short, and when you should (and shouldn't) start measuring it. For a broader look at how NPS fits into your overall customer experience strategy, our complete Net Promoter Score guide covers the full picture.
What Is Net Promoter Score?
Net Promoter Score (NPS) is a customer loyalty metric that measures how likely your customers are to recommend your company, product, or service to someone else. One question. A 0-10 scale. A score between -100 and +100.
That's the mechanics. But here's why it matters.
Introduced in 2003 by Fred Reichheld at Bain & Company, NPS was built to capture something that customer satisfaction surveys kept missing. CSAT measures whether someone was happy with a specific moment. NPS measures whether they'd stake their reputation on you. Think about that difference for a second. When someone recommends your product to a friend, they're putting their credibility on the line. That's not satisfaction. That's loyalty. And those are very different things.
NPS is both a metric and a process. The metric is the score. The process is everything around it: collecting responses, sorting customers into groups, following up with detractors, giving promoters a reason to advocate, and feeding what you learn back into the business. The number by itself? Just a number. What you do with it is where the value lives.
And the reason NPS caught on across industries isn't that it's the most statistically rigorous metric ever designed. It's that teams actually use it. One question. One score. You can send it via email survey, in-app pop-up, SMS, or web embed. Compare that to the forty-question satisfaction surveys that get planned for months and never shipped. NPS surveys actually go out. Response rates hold up. And you end up with useful NPS data across your customer base instead of a spreadsheet nobody opens.
How the NPS Question and Scale Work
The standard NPS survey question reads:
"On a scale of 0 to 10, how likely are you to recommend [company/product/service] to a friend or colleague?"
Why 0 to 10? Because the 11-point scale captures differences that smaller scales can't. Someone who rates you a 7 is saying "sure, it's okay." Someone who rates you a 10 is saying "absolutely, I already told three people." A five-point scale treats both of those as the same response. They're not.

A standard NPS survey has two parts. The rating question comes first. Then an open-ended follow-up: "What's the primary reason for your score?" The first part gives you the number. The second part gives you the story. And honestly, most of the useful information in any NPS program comes from that second question. The score tells you something moved. The comment tells you what.
Surveys typically go out via email, in-app, SMS, or web. The question stays the same across all of them. What changes is when and where you ask, and that decision shapes your data quality more than most teams realize.
Promoters, Passives, and Detractors: What Each Group Actually Does
Every NPS response sorts into one of three groups. And these aren't arbitrary buckets someone invented for neatness. They map to real behavioral patterns that Bain & Company's research has tracked across industries for over two decades: how people spend, whether they renew, and what they say about you to others.
| | Promoters (9-10) | Passives (7-8) | Detractors (0-6) |
| --- | --- | --- | --- |
| What they do | Refer friends, renew contracts, try new products, leave positive reviews | Stay if nothing better appears, don't actively recommend, switch when given a reason | Cancel, complain publicly, discourage others from buying |
| Revenue behavior | Higher lifetime value, lower cost to serve, organic acquisition channel | Moderate lifetime value, susceptible to competitor offers | Higher support costs, churn risk, negative word of mouth |
| What they signal | Product-market fit is strong in this segment | Satisfaction without enthusiasm. Your product meets expectations but doesn't exceed them | Something is broken: could be product, service, pricing, or expectations |
| Common mistake | Assuming they'll promote without being asked | Ignoring them because they "seem fine" | Treating them as lost causes instead of diagnostic opportunities |
Promoters account for more than 80% of referrals in most businesses. They repurchase more often and cost less to serve. The data on loyalty behavior backs this up: 66% of US consumers spend more on brands they're loyal to, and 59% actively refer brands they trust. Enthusiastic customers don't just stick around. They bring new customers with them.
But here's the part that trips most teams up. They spend all their energy on unhappy customers and completely ignore passive customers. Passives are actually the biggest swing group in your entire customer base. Their repurchase and referral rates run as much as 50% lower than promoters'. They're satisfied enough not to complain. Not loyal enough to stay when something better shows up. That "7" rating isn't a compliment. It's a shrug. And every competitor in your space is actively targeting those shrugs right now.
NPS also works as an early warning system. A detractor response doesn't just mean someone's unhappy. It means they're evaluating alternatives, they might tell others to stay away, and your customer service team has a narrow window to intervene before that customer is gone for good. That's why running NPS surveys regularly matters. Not as a quarterly checkbox, but as a continuous signal that tracks changes in customer churn risk and customer sentiment over time.
Also read: How to respond to detractors and recover at-risk customers
How Is NPS Calculated?
NPS = % Promoters − % Detractors
Simple enough. But there's one thing that consistently confuses people: passives count toward your total respondents, but they don't show up in the formula. They affect your percentages (because they're in the denominator), but they're not directly added or subtracted. If you've ever stared at the math and wondered why it doesn't seem to add up to 100%, that's why.
The NPS score ranges from -100 (literally every customer is a detractor) to +100 (every customer is a promoter). In practice, most real-world scores land somewhere between -10 and +70.
You can also use an online NPS calculator to get your Net Promoter Score quickly.
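Or, if your responses live in code, the whole calculation fits in a few lines. Here's a minimal Python sketch; the function name and sample ratings are illustrative, not from any particular library:

```python
def calculate_nps(ratings: list[int]) -> float:
    """Compute NPS from raw 0-10 survey ratings.

    Promoters are 9-10, detractors are 0-6. Passives (7-8) count
    toward the total but are never added or subtracted directly.
    """
    if not ratings:
        raise ValueError("no responses to score")
    total = len(ratings)
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    # Passives sit inside `total`, which is why the two percentages
    # in the formula don't visibly "add up" to 100%.
    return (promoters - detractors) / total * 100

# 4 promoters, 3 passives, 3 detractors out of 10 responses
print(calculate_nps([10, 9, 9, 10, 7, 8, 7, 5, 3, 6]))  # -> 10.0
```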
Worked NPS Calculation with Sample Data
Say you've just launched a product update and you survey 200 customers to see how it landed.
| Group | Count | Percentage |
| --- | --- | --- |
| Promoters (9-10) | 120 | 60% |
| Passives (7-8) | 50 | 25% |
| Detractors (0-6) | 30 | 15% |
| NPS | | 45 |
The math: 120 ÷ 200 = 60% promoters. 30 ÷ 200 = 15% detractors. 60% − 15% = 45.
That means significantly more loyal customers than unhappy ones. In most industries, a 45 is a strong position. But context changes everything. A 45 in telecommunications? Exceptional. A 45 in insurance? Closer to the industry average. The number only means something when you know what you're comparing it to.
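To sanity-check the table in code, the aggregated counts drop straight into the same arithmetic (a continuation of the sketch above):

```python
# Aggregated counts from the worked example above
promoters, passives, detractors = 120, 50, 30
total = promoters + passives + detractors  # 200 respondents

nps = (promoters - detractors) / total * 100
print(nps)  # 45.0 -> 60% promoters minus 15% detractors
```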
For more calculation scenarios and advanced methods, see our guide on how to calculate NPS.
What Your NPS Score Tells You (and What It Doesn't)
Here's a rough map of where different scores tend to sit (a short bucketing sketch in code follows the list):
- Below 0 means you have more detractors than promoters. That's a negative NPS score, and it warrants real investigation. Not a fire drill, but genuine inquiry into what's going wrong. Something in the customer experience is consistently failing.
- 0 to 30 is positive territory, but there's room to grow. This is common for companies early in their CX journey, or in industries where NPS averages naturally run lower.
- 30 to 50 is solid ground. Meaningfully more promoters than detractors, and the gap is wide enough that you can trust the signal.
- 50 to 70 is excellent. Your customer base is genuinely loyal. Scores here typically come with tangible benefits: stronger customer retention, lower churn rates, and the kind of word-of-mouth referral behavior that actually shows up in your acquisition numbers.
- 70+ is world-class. And rare. This is the territory of companies where customers don't just like the product. They identify with it.
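If it helps to have those bands in executable form, here's a minimal Python sketch. The cutoffs mirror the ranges above; the function name and labels are ours, not an industry standard:

```python
def nps_band(score: float) -> str:
    """Map an NPS score to the rough bands described above."""
    if score < 0:
        return "negative: more detractors than promoters, investigate"
    if score < 30:
        return "positive territory, room to grow"
    if score < 50:
        return "solid ground"
    if score < 70:
        return "excellent"
    return "world-class"

print(nps_band(45))  # -> "solid ground"
```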
But none of those ranges tell the full story on their own. The score alone doesn't tell you what to fix. NPS is a directional signal, not a diagnostic tool. It tells you whether customers are loyal. Not why. A 45 among internet service providers means something completely different from a 45 in SaaS. And an aggregate NPS score across your whole company can mask massive variation between products, segments, or regions.
That's where segmentation changes the picture entirely. A company-wide score of 42 might look stable quarter over quarter. But break it down by product line and you might find one product sitting at 65 while another drags at 12. Touchpoint-level NPS data shows you which interactions create promoters and which ones create detractors. That's where you actually find what needs fixing. Customer segmentation with NPS surveys is how you get there.
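As a sketch of what that breakdown looks like in practice, here's the per-segment version of the calculation. The segment names and ratings are made up, and `calculate_nps` is the helper from the calculation section above:

```python
from collections import defaultdict

# Hypothetical responses tagged by product line: (segment, 0-10 rating)
responses = [
    ("analytics", 10), ("analytics", 9), ("analytics", 9), ("analytics", 7),
    ("reporting", 4), ("reporting", 6), ("reporting", 9), ("reporting", 7),
]

by_segment = defaultdict(list)
for segment, rating in responses:
    by_segment[segment].append(rating)

for segment, ratings in sorted(by_segment.items()):
    print(segment, calculate_nps(ratings))
# analytics 75.0   <- 3 promoters, 1 passive
# reporting -25.0  <- 1 promoter, 1 passive, 2 detractors
# The blended score across all 8 responses is 25.0, which hides the gap.
```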
And context always matters. NPS benchmarks vary widely by industry. A score that looks modest in absolute terms might still mean you're outperforming every competitor in your market. Tracking your own score over time, and comparing it against industry-specific data, will always tell you more than chasing someone else's number.
What NPS Gets Right, and Where It Falls Short
Most content about net promoter score reads like either a sales pitch for the metric or a takedown piece. It's either "the one number you need to grow" or "an overhyped vanity metric that doesn't predict anything." The truth, as usual, sits somewhere more interesting than either extreme.
What it gets right
It predicts retention better than satisfaction surveys. CSAT tells you if someone was happy with a specific interaction. NPS tells you whether they're likely to stay, come back, and bring others with them. Those are different questions with very different operational value. Bain & Company's research found that NPS explains roughly 20% to 60% of the variation in organic growth rates among competitors. Companies with high NPS consistently see stronger customer retention and lower churn. The relationship between loyalty and revenue is real, and it holds up across industries.
It's benchmarkable. This one's underrated. Unlike whatever custom satisfaction scale your team built internally, NPS uses a standardized question and methodology. You can compare your score against NPS benchmarks, industry averages, and direct competitors. Try doing that with a homegrown survey. You can't.
It's simple enough that teams actually use it. One question. One score. Deployable via email, in-app, SMS, or web. The simplicity isn't a weakness. It's the entire point. Complex forty-question satisfaction surveys sound great in planning meetings. They almost never get shipped. NPS surveys do. And response rates hold up well enough to produce directionally useful NPS data without surveying your entire customer base.
The follow-up question is where the real signal lives. The net promoter score works best as a two-part system. The score tells you where you stand. The open-ended follow-up ("What's the primary reason for your score?") tells you what to do about it. If you're only looking at the number on a dashboard and not reading the comments, you're leaving the most valuable part of every NPS survey on the table.
Where it falls short
A number without context is just a number. NPS tells you whether customers are loyal. It doesn't tell you why. Without follow-up questions and qualitative analysis, the net promoter score becomes something you glance at on a dashboard and forget about by lunch. You can track progress over time. But tracking a line on a chart isn't the same as understanding what's moving it.
Cultural scoring bias is a real problem, and most teams ignore it. A "7" in Japan doesn't carry the same meaning as a "7" in the United States. A "6" from one demographic might signal something entirely different than a "6" from another. NPS assumes the 0-10 scale means the same thing everywhere. It doesn't. If you're running surveys across geographies or diverse customer segments, you need to interpret scores with that built into your thinking. For a deeper dive, see our full treatment of NPS limitations.
It gets gamed. Often. The moment companies tie bonuses or performance reviews to NPS results, "score begging" shows up right behind it. Cherry-picked survey timing. Leading questions. Surveys that mysteriously only reach customers who had good experiences. The number climbs. The actual customer experience stays exactly where it was. The net promoter system works best when it drives improvement, not someone's quarterly bonus.
It's one metric, not the whole picture. NPS captures loyalty. A customer satisfaction score (CSAT) captures how someone felt about a specific interaction. Customer effort score (CES) captures how easy it was to get something done. You need at least two of these to see your customer journey clearly. Any team running NPS in isolation is working with about a third of the information they need. You can read more about key customer satisfaction metrics to track.
When Should You Start Measuring NPS?
Not every company is ready. And here's the uncomfortable truth: measuring NPS before you're ready to act on what you learn is worse than not measuring at all. It creates expectations you're not meeting, and it teaches your customer service team that surveys are performance art, not performance data.
- You're ready when you have repeat customers or ongoing customer relationships. NPS measures loyalty, and loyalty only develops when existing customers have had enough interactions to actually form an opinion. If you're pre-launch or primarily selling one-time products to new customers, NPS won't give you useful signal yet. You need a customer base that has a relationship with you, not just a transaction.
- You're ready when you've hit product-market fit, or you're close to it. Before that point, your company's products are still changing in fundamental ways. NPS feedback reflects a moving target. After product-market fit, NPS becomes one of the clearest signals of whether your growth engine is actually working or just burning through customers.
- You're ready when someone on your team will actually do something with the results. Collecting customer feedback without a plan to close the feedback loop isn't just pointless. It's corrosive. Customers who take time to share their thoughts and hear nothing back don't fill out your next survey. They just leave. Quietly.
One thing worth clearing up: NPS isn't only for enterprises. The net promoter system works just as well for a 20-person startup as it does for a Fortune 500 company. What scales is the program you build around it. The core question and the loyalty signal it produces? Those work at any size.
- You're not ready when you have fewer than 20-30 active customers (the sample's too thin for a meaningful signal), your product is still in beta and changing weekly, or there's simply no process in place to follow up on what customers tell you.
You'll also need to choose between relational and transactional NPS surveys. They serve different purposes at different moments in the customer journey, and picking the wrong one dilutes the data you get back.
The measurement cycle itself is pretty straightforward: send the question at a relevant touchpoint, collect enough survey responses for a meaningful sample (50-100 minimum), calculate NPS, interpret it against your NPS benchmarks, and act on what you learn. Then do it again. Quarterly for relationship NPS, continuously for transactional NPS surveys. You can read more about how often to send NPS surveys for the most effective results.
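If you automate that cycle, it's worth encoding the sample-size guard explicitly. A minimal sketch, reusing `calculate_nps` from earlier; the threshold reflects the 50-100 response minimum mentioned above:

```python
MIN_RESPONSES = 100  # upper end of the 50-100 minimum guideline

def nps_or_keep_collecting(ratings: list[int]) -> float | None:
    """Return the NPS only once the sample is big enough to trust."""
    if len(ratings) < MIN_RESPONSES:
        return None  # too thin a sample; the score would be noise
    return calculate_nps(ratings)
```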
So You Have Your NPS. Now What?
NPS isn't complicated. One question, three categories, one score. The hard part was never understanding what it is.
The hard part is what comes after. Reading the comments, not just the number. Following up with the customer who gave you a 3, not just flagging them in a report. Figuring out why your passives aren't converting into promoters instead of assuming they're fine because they're not complaining. The companies that get the most out of NPS aren't the ones with the highest scores. They're the ones where the score actually changes how people work. That's the difference between measuring loyalty and building it.