TL;DR
- The NPS question is the same, but it measures different things in B2B versus B2C contexts.
- B2B scores average 25-40 (professional recommendation risk, multi-stakeholder complexity). B2C scores average 40-50 (personal satisfaction, emotional connection).
- B2B NPS requires account-level aggregation across stakeholders. B2C NPS is individual-level scoring.
- Never benchmark B2B against B2C or combine scores if you run both. The methodology differences make comparison meaningless.
The same NPS question works in both B2B and B2C. "How likely are you to recommend us?" gets asked either way. But everything else about how you run the program is different.
B2B NPS measures account health across multiple stakeholders and long relationships. B2C NPS measures individual sentiment at transactional moments. The scores aren't comparable. The programs can't be run the same way. And the actions you take based on results are fundamentally different.
This isn't about B2B versus B2C business models in general. It's about how the Net Promoter Score methodology works differently when you apply it to organizational buyers versus individual consumers. The recommendation question measures different things. The scoring works differently when you have multiple stakeholders. The follow-up actions require different workflows.
Here's how NPS programs differ between B2B and B2C, and what that means for your survey design, scoring, analysis, and action.
The Core Difference: What You're Actually Measuring
B2B and B2C NPS programs use the same question but measure fundamentally different things. Understanding this is critical to interpreting your scores correctly.
1. Account-Level vs Individual-Level NPS
In B2B, you're measuring the health of an account, not a person. When you ask "How likely are you to recommend us?", you're not just asking one customer. You're asking multiple stakeholders inside every customer relationship: users, admins, decision-makers, executives. Each one has a different experience with your product or service.
This creates an NPS aggregation problem that doesn't exist in B2C. A single detractor might not represent account risk if five other stakeholders are promoters. But which scores matter more? The executive sponsor who signs the renewal, or the end-users who actually use the product daily? Most B2B companies solve this by tracking both: individual NPS by role, plus a weighted account-level health score.
In B2C, you're measuring the sentiment of an individual customer. One person equals one NPS response. There's no aggregation needed, no stakeholder weighting, no account-level rollup. The person who bought the product is the person who uses it is the person who recommends it or doesn't. The NPS calculation is straightforward: count promoters, count detractors, subtract, and you're done.
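That individual-level arithmetic fits in a few lines. A minimal sketch in Python, not any particular survey tool's implementation:

```python
def nps(scores):
    """Net Promoter Score from a list of 0-10 responses.

    Promoters score 9-10, detractors 0-6; passives (7-8) count
    toward the total but cancel out of the numerator.
    """
    if not scores:
        raise ValueError("no responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# 4 promoters, 3 passives, 3 detractors out of 10 responses:
print(nps([10, 9, 9, 9, 8, 7, 7, 6, 4, 2]))  # → 10
```

Note the score can be negative (more detractors than promoters) and maxes out at 100.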
2. What "Likelihood to Recommend" Measures in Each Context
The NPS question asks about recommendation likelihood, but what drives that likelihood is different in B2B versus B2C.
In B2B, "likelihood to recommend" reflects organizational trust and professional risk assessment. A procurement manager scoring you a 9 is saying "I would stake my professional reputation on recommending this vendor to my peers." That's not casual. The recommendation happens in a professional context (RFP processes, vendor evaluations, peer network conversations), and a bad recommendation reflects poorly on the person who made it.
Multiple factors beyond product satisfaction drive the B2B NPS score: Did implementation go smoothly? Is support responsive when escalations happen? Are contract terms fair? Will the vendor still exist in three years? Is the product roadmap aligned with our business direction? A user might love the product but score it a 6 because procurement was a nightmare.
In B2C, "likelihood to recommend" reflects personal satisfaction and social identity. A consumer scoring you a 9 is saying "This product made me happy and I'd tell my friends about it." The recommendation happens casually (dinner conversation, social media posts, online reviews), and the social cost of a bad recommendation is low. If you tell a friend to try a new restaurant and they don't like it, nobody gets fired.
How Decision-Making Complexity Affects NPS Scores
B2B purchase decisions involve committees, multiple approvals, and long evaluation cycles. This committee structure directly affects NPS scoring in ways that don't exist in B2C.
Take Salesforce as an example. The sales rep using it daily might score it a 9 (promoter). The admin managing user permissions might score it a 7 (passive) because setup was complicated. The CFO looking at the invoice might score it a 4 (detractor) because of unexpected costs. The VP of Sales who sees the pipeline impact might score it a 10 (promoter). One account, four different NPS responses, and all four matter to the renewal decision.
Which score represents the "true" account NPS? They all do. The challenge in B2B is aggregating these scores into a meaningful account health metric. Most companies weight by influence: the CFO's score carries more weight than an end-user's score because the CFO controls the renewal budget.
Compare that to Netflix. One person decides to subscribe. One person watches. One person scores the NPS survey. The household might have multiple viewers, but the account relationship is still individual-level. There's no procurement committee for Netflix. The NPS score directly reflects one person's satisfaction with the product.
This structural difference is why B2B NPS programs need stakeholder segmentation and B2C programs don't. You can't just average all B2B responses and call it your NPS score. You have to understand who's answering and what their answer means for account health.
What this means for your NPS program: B2B programs need multi-stakeholder survey logic, role-based scoring, and weighted account-level aggregation. B2C programs need volume management, fast transactional surveys, and individual-level action triggers. Trying to run a B2C-style NPS program in a B2B context creates noise. You're measuring too granularly and missing account health signals. Trying to run a B2B-style program in B2C misses velocity. You're moving too slowly and losing the transactional signal that matters.
Benchmark Differences: What "Good" Means in Each Model
The numbers aren't comparable. Here's why, and what to use instead.
Industry Benchmark Overview
Quick reference:
| | Average NPS | "Good" Threshold | Top Performers |
|---|---|---|---|
| B2C | 40-50 | 50+ | 60-70+ |
| B2B | 25-40 | 40+ | 50-60+ |
Real company examples make this concrete.
- B2C top performers: Apple consistently scores 70+. Tesla runs in the high 90s (among owners, not the general public). Netflix sits around 68. USAA scores in the mid-70s. These are brands with strong emotional connections and loyal customer bases.
- B2C average performers: Amazon scores around 54. Costco runs in the mid-70s. Target sits around 45. These are solid retail experiences without the cult-brand premium.
- B2B SaaS leaders: Salesforce typically reports in the 40-50 range depending on customer segment. HubSpot runs in the mid-50s. Zoom (enterprise accounts) scores around 45-50. These are considered strong scores in the B2B context.
- B2B enterprise software: SAP historically scores in the 20s and 30s. Oracle tends toward the low 30s. ServiceNow runs in the 40s. The complexity of enterprise implementations keeps scores structurally lower than consumer products.
B2B scores are structurally lower for reasons that have nothing to do with product quality or customer experience competence.
- Relationship complexity: More stakeholders means more friction points. Getting universal promoters across an entire account is harder than getting one happy consumer.
- Organizational constraints: Even happy B2B customers face procurement hurdles, IT approval processes, compliance requirements. All of that affects "likelihood to recommend" in ways that have nothing to do with whether your product actually works.
- Higher stakes: B2B purchases involve risk, committees, and long commitments. The recommendation threshold is higher. Recommending a B2B vendor means putting your professional reputation behind it.
- Professional recommendation norms: B2B buyers don't casually recommend vendors the way consumers recommend restaurants. The bar is higher.
B2C scores are structurally higher for equally structural reasons.
- Emotional purchases: Many B2C products tap into identity and emotion. Apple doesn't just sell phones. Nike doesn't just sell shoes. Peloton doesn't just sell bikes. People buy into the brand identity. When your purchase reflects who you are or who you want to be, you're more likely to be a promoter.
- Low-risk recommendations: Recommending a consumer product doesn't put your job on the line. If you tell a friend to try a new restaurant and they don't like it, nobody gets fired. If you recommend enterprise software to your procurement team and the implementation fails, that's a career risk.
- Social proof dynamics: B2C recommendations happen casually, frequently, on social media. "Just tried this new coffee brand, it's amazing." The friction is low. B2B recommendations are formal. "We should evaluate this vendor for our tech stack." The process is heavy.
How to Benchmark Correctly
For B2B, compare within your segment. SMB, mid-market, and enterprise each have different score ranges. A 35 in enterprise software is solid. A 35 in SMB SaaS might signal trouble.
- Compare within your industry vertical. B2B SaaS doesn't equal B2B manufacturing. The relationship dynamics are different.
- Track directional trends, not absolute scores. Moving from 32 to 38 over two quarters matters more than hitting some arbitrary 50 benchmark you pulled from a consumer brand.
- Focus on account-level NPS cohorts. Renewal risk cohort, expansion opportunity cohort, at-risk cohort. The score distribution across your book of business tells you more than the overall average.
For B2C, compare within your product category. Retail doesn't equal subscription doesn't equal marketplace doesn't equal service. Each has different loyalty dynamics.
- Compare against brand type. Luxury brands run higher NPS than value brands, even when both deliver good experiences. That's positioning, not performance.
- Monitor competitive position. Your score relative to direct competitors matters more than hitting an industry average pulled from companies that don't compete with you.
- Track segment differences. Demographic, geographic, purchase channel. A 45 overall might hide a 60 in one segment and a 30 in another. The mix matters.
The trap: don't benchmark your B2B NPS against B2C companies. A 35 in B2B software is strong. A 35 in B2C e-commerce is concerning. The structural differences make cross-model comparisons meaningless. For more on what is a good net promoter score across different contexts, we break down benchmarks by industry and business model.
How the NPS Question Works Differently in B2B vs B2C
The same question measures different things depending on who's answering it.
What "Likelihood to Recommend" Actually Means
The NPS question is "How likely are you to recommend us to a colleague or friend?" But what a B2B buyer and a B2C consumer hear when you ask that question is completely different.
In B2B, the recommendation question measures professional risk tolerance. A procurement manager scoring you a 9 is putting their professional reputation behind the vendor, knowing a bad recommendation reflects poorly on the person who made it. The stakes are real.
A B2B "would you recommend" isn't just about satisfaction. It's about whether the product delivers business value, whether implementation went smoothly, whether support is responsive, whether the contract terms are fair, and whether the vendor will still be around in three years. All of that factors into the score.
In B2C, the recommendation question measures personal satisfaction and social identity. A consumer scoring you a 9 is happy with the product and willing to tell friends about it. The social cost of a bad consumer recommendation is low, and the emotional bar is lower.
A B2C "would you recommend" is often about how the product makes someone feel. Apple customers don't just recommend iPhones because they work well. They recommend them because owning one aligns with their self-image. The recommendation is partly identity-driven.
Calculating Account-Level NPS in B2B
B2C NPS calculation is straightforward. One customer, one score. Subtract detractors from promoters, divide by total respondents, multiply by 100. Done.
B2B NPS calculation has a stakeholder problem. You have five people in one account. Two promoters (9s), two passives (7s and 8s), one detractor (4). Do you:
- Average all five scores into one account-level score?
- Weight by role (decision-maker votes count more than end-users)?
- Weight by seat count (if the detractor represents 50 users, does that matter)?
- Track individual scores separately and look for patterns?
The answer depends on what you're trying to measure. If you're measuring renewal risk, decision-maker NPS matters more than end-user NPS. If you're measuring product adoption, end-user NPS matters more. Most B2B companies do both: track individual scores by role, then roll up to an account health metric weighted by influence.
For instance, Salesforce account with an executive sponsor (9), three admins (8, 7, 6), and twenty end-users (average 7). The account-level NPS might be calculated as: executive sponsor = 50% weight, admins = 30% weight, end-users = 20% weight. Final account score reflects the reality that the executive sponsor drives renewal decisions.
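The rollup above can be sketched as: compute NPS per role, then take a weighted sum. The 50/30/20 weights are the illustrative ones from the example, a judgment call rather than a standard:

```python
def group_nps(scores):
    """Plain NPS for one group of 0-10 responses."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

def account_nps(responses_by_role, weights):
    """Weighted account-level rollup: NPS per role, then weighted sum.

    Roles with no responses are skipped and the remaining
    weights are renormalized so they still sum to 1.
    """
    scored = {r: group_nps(s) for r, s in responses_by_role.items() if s}
    total_w = sum(weights[r] for r in scored)
    return round(sum(weights[r] * scored[r] for r in scored) / total_w)

# Illustrative weights from the example above (not a standard):
weights = {"executive": 0.5, "admin": 0.3, "end_user": 0.2}
responses = {
    "executive": [9],        # promoter -> role NPS 100
    "admin": [8, 7, 6],      # one detractor -> role NPS -33
    "end_user": [7] * 20,    # all passives -> role NPS 0
}
print(account_nps(responses, weights))  # → 40
```

Weighting role-level NPS (rather than averaging raw scores) preserves the promoter/detractor semantics at every level of the rollup.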
What a "9" Means in Each Context
A 9 from a B2B buyer and a 9 from a B2C consumer look the same numerically, but they represent different levels of enthusiasm.
B2B 9: "This vendor solves our problem reliably. I would recommend them in a professional context if asked, though I'd caveat it with 'depends on your use case.'" Reserved, conditional, professional.
B2C 9: "I love this brand. I actively tell people about it. I'd recommend it unprompted." Emotional, unconditional, personal.
This is why B2B scores run lower. The psychological bar for giving a 10 in a professional context is higher than in a consumer context. A B2B buyer giving a 10 is rare because professional recommendations carry professional consequences. A B2C consumer giving a 10 is common because the emotional connection is real and the stakes are low.
How Response Rates Differ in B2B vs B2C
B2B NPS surveys typically get 15-25% response rates. B2C surveys can hit 35-55% on SMS or in-app, but drop to 10-18% for email.
Why B2B is lower: Survey fatigue (business buyers get surveyed constantly), time constraints (answering during work hours competes with actual work), and skepticism (B2B buyers don't always trust that their feedback will matter).
Why B2C is higher for some channels: Immediacy (transactional surveys sent right after purchase catch people while they're still engaged), simplicity (one question is easy), and emotional recency (B2C purchases often create immediate emotional reactions worth sharing).
The methodology difference: B2B NPS programs accept lower response rates because each response represents a high-value account. Ten responses from enterprise accounts are more actionable than 100 responses from random consumers. B2C programs need higher volume because individual responses carry less weight. For more on improving response rates, see our guide on nps survey response rates.
Survey Design Differences: Who, When, What, Where
B2B surveys all stakeholders quarterly via email. B2C surveys individuals transactionally via SMS or in-app. Same NPS question, but who you ask, when you ask, which channels you use, and what follow-up questions matter all change based on your business model.
1. Who Do You Survey?
B2B has a stakeholder problem. Multiple people per account, each with different experiences and different levels of influence over the renewal decision.
Survey all roles, but weight them differently. Users measure product satisfaction and feature adoption. Admins measure ease of management and support quality. Decision-makers measure business value and ROI perception. Executives measure strategic partnership strength.
You need account-level aggregation. Roll up individual responses into an account health score. But also track NPS by role. Identify champion versus detractor patterns. An account where the decision-maker is a 9 but all the users are 5s looks healthy on paper, but you're one leadership change away from churn.
B2C has a volume problem. High volume, low individual value except for high-ticket purchases. You can't survey everyone without creating survey fatigue, and you don't need to.
Survey a representative sample or a trigger-based subset. Sample active customers monthly or quarterly for relationship NPS. Survey every customer post-transaction for transactional NPS if your volume allows it. Use stratified sampling for high-volume businesses. Survey across segments to get representation without overwhelming anyone.
No aggregation needed. One person equals one score.
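Stratified sampling for a high-volume B2C base can be as simple as taking the same fraction of each segment. A sketch; the segment names and 10% rate are placeholder assumptions:

```python
import random

def stratified_sample(customers_by_segment, rate=0.05, seed=None):
    """Sample the same fraction of every segment so large segments
    don't drown out small ones in the survey send."""
    rng = random.Random(seed)
    picked = []
    for segment, customers in customers_by_segment.items():
        k = max(1, round(len(customers) * rate))  # at least one per segment
        picked.extend(rng.sample(customers, k))
    return picked

# 10% of each segment: 10 first-time buyers, 2 repeat buyers
sample = stratified_sample(
    {"first_time": list(range(100)), "repeat": list(range(100, 120))},
    rate=0.10,
)
print(len(sample))  # → 12
```

Without the stratification, a uniform 10% sample of this base would be dominated by first-time buyers and could easily miss the repeat segment entirely.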
2. When Do You Survey?
B2B timing is slow. Relationships are slow-moving. Quarterly or bi-annual relationship NPS works. Milestone-based transactional NPS works for key moments: post-onboarding, post-renewal, post-major release, post-support escalation.
Frequency cap matters. 90 to 180-day suppression between surveys. Don't oversample accounts. You're not measuring transactions, you're measuring relationships. Relationships don't change weekly.
Timing consideration: align with business cycles. Don't survey during quarter-end chaos when your customers are buried. Post-renewal is ideal. The contract just renewed, the relationship is stable, and you're measuring satisfaction at a natural checkpoint.
B2C timing is fast. Monthly to quarterly for relationship NPS, depending on purchase frequency. Transactional NPS goes out immediately post-purchase or post-interaction. Within 24 to 48 hours, while the experience is fresh.
Frequency cap is shorter. 30 to 60 days. Faster turnover is acceptable because transaction volume is higher.
Timing consideration: survey when the experience is fresh, not when the customer is busy. An NPS survey on a mobile checkout screen right after purchase works. An NPS survey three days later when they've moved on doesn't.
3. What Do You Ask?
The core NPS question is the same. "How likely are you to recommend [company] to a colleague or peer?" for B2B. "How likely are you to recommend [brand] to a friend or family member?" for B2C.
B2B follow-up questions:
- "What's the primary reason for your score?" (open-text, required)
- "Which aspect of working with us matters most?" (business value, product, support, partnership)
- "Who else in your organization should we be talking to?" (stakeholder mapping)
- "What would make you a stronger advocate?" (expansion signal, upsell signal)
Keep it short. Three to five questions max. B2B respondents are time-constrained. They're answering during work hours, between meetings. Respect that.
B2C follow-up questions:
- "What's the main reason for your score?" (open-text)
- "What did we do well?" or "What could we improve?" (split based on score)
- Optionally: product-specific questions, feature satisfaction
Keep it very short. One to two questions max. B2C respondents drop off fast. Every additional question cuts your completion rate. For more tactical guidance on nps survey question design, we cover question banks organized by customer journey stage and business model.
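The score-based split mentioned above ("What did we do well?" versus "What could we improve?") is a two-branch rule. A sketch with placeholder question copy:

```python
def classify(score):
    """Standard NPS bands: 9-10 promoter, 7-8 passive, 0-6 detractor."""
    if score >= 9:
        return "promoter"
    if score >= 7:
        return "passive"
    return "detractor"

def follow_up(score):
    # Hypothetical copy; substitute your own question bank.
    if classify(score) == "promoter":
        return "What did we do well?"
    return "What could we improve?"

print(follow_up(10))  # → What did we do well?
print(follow_up(7))   # → What could we improve?
```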
4. Where and How Do You Distribute?
B2B channels: email is primary. It's professional, expected, and allows for longer surveys. In-app is secondary. Use it for product users, post-milestone or as a periodic check-in. Account manager outreach is for high-touch. Strategic accounts get white-glove follow-up.
Avoid SMS. Too casual for B2B. Avoid social media. Doesn't map to accounts, and most B2B buyers don't want to engage with vendors publicly.
B2C channels are everywhere. Email works for established customers and subscription businesses. SMS gets high engagement for mobile-first businesses and transactional moments. In-app is ideal for mobile apps and SaaS products with consumer users. Website works post-purchase or post-support, either as exit intent or embedded.
Use multiple channels. B2C customers are everywhere. Meet them where they are. For detailed channel strategies, see our guide on how when and where to collect net promoter score surveys.
Decision framework: B2B uses email first, in-app second. Add account manager outreach for the top 20% of accounts by revenue. B2C matches the channel to the transaction. E-commerce gets email or SMS. Mobile apps get in-app. Physical retail gets SMS or QR codes.
Program Strategy Differences: Goals, Actions, Teams
B2B optimizes for account retention and expansion. B2C optimizes for customer lifetime value and viral growth. The goals are different, so detractor recovery workflows, promoter activation strategies, and team ownership structures need to be different too.
a. Program Goals
B2B's primary goal is account retention and expansion. NPS is a leading indicator for churn risk and upsell opportunity. Detractor accounts need immediate executive intervention. Promoter accounts are expansion candidates.
Secondary goal: reference generation and case studies. Promoters become reference customers, testimonials, co-marketing partners. B2B buying involves validation. Promoters de-risk sales cycles for your pipeline.
Measurement horizon: quarterly to annual. These are slow-moving relationships.
B2C's primary goal is customer lifetime value and repeat purchase. NPS predicts repurchase likelihood, not just satisfaction. Detractors churn faster. Promoters have two to three times higher LTV.
Secondary goal: word-of-mouth and organic growth. Promoters drive referrals, social proof, reviews. B2C growth is often viral. NPS tracks referral potential.
Measurement horizon: monthly to quarterly. These are fast-moving transactions.
b. Action Triggers and Response Strategy
B2B detractor response: immediate account manager or CS owner notification. Executive escalation for strategic accounts. Root cause analysis required, not just "sorry you're unhappy." Recovery window is five to seven business days. B2B accounts are patient, but they expect follow-through.
B2B passive response: CS check-in, not emergency escalation. Identify expansion blockers or competitive threats. This is relationship building, not issue resolution.
B2B promoter response: request case study, testimonial, referral. Invite to advisory board or beta programs. Nurture into champions and advocates.
HubSpot does this well. Promoter accounts get invited to speak at INBOUND (their annual conference), join the customer advisory board, or participate in beta programs for new features. The promoter becomes a reference customer for sales, a case study for marketing, and a feedback source for product. One high-value relationship, multiple business outcomes. More on activating nps promoters for growth.
B2C detractor response: automated apology plus resolution offer. Discount, refund, support escalation. Speed matters. Respond within 24 to 48 hours. B2C customers expect fast action. Route to support team, not executive team.
B2C passive response: automated "how can we improve" follow-up. Personalized re-engagement offers. Low-touch intervention. High volume makes manual outreach impractical.
B2C promoter response: request review on Google, Yelp, Trustpilot, Amazon. Referral incentive program. Give $20, get $20. Social sharing prompts for Instagram, Facebook.
Dropbox built its growth engine on B2C promoter activation. Promoters got extra storage for referring friends. Simple, automated, viral. No human involvement needed. The promoter activation scaled to millions of users because it was fully automated and incentivized.
For detailed playbooks on handling each segment, see our guides on nps detractors and converting passives.
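Condensing the playbooks above into a routing table makes the B2B/B2C asymmetry explicit. The handler names here are placeholders for whatever workflows you actually run:

```python
def classify(score):
    """Standard NPS bands: 9-10 promoter, 7-8 passive, 0-6 detractor."""
    return "promoter" if score >= 9 else "passive" if score >= 7 else "detractor"

# Hypothetical handler names condensed from the playbooks above.
ROUTES = {
    ("b2b", "detractor"): "notify_account_manager",   # 5-7 business day recovery
    ("b2b", "passive"):   "schedule_cs_checkin",
    ("b2b", "promoter"):  "invite_case_study",
    ("b2c", "detractor"): "send_apology_offer",       # respond within 24-48h
    ("b2c", "passive"):   "send_improvement_survey",
    ("b2c", "promoter"):  "request_public_review",
}

def route(model, score):
    return ROUTES[(model, classify(score))]

print(route("b2b", 4))   # → notify_account_manager
print(route("b2c", 10))  # → request_public_review
```

The same score lands in a human queue on the B2B side and an automated workflow on the B2C side, which is the whole point of keeping the programs separate.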
c. Team Ownership
B2B primary owner: Customer Success or Account Management. Support roles include Product for product-driven detractors, Sales for new account benchmarking, Leadership for strategic account escalations. Cadence: quarterly business reviews include NPS trends by account cohort.
B2C primary owner: CX or Support team. Support roles include Marketing for promoter activation, Product for product-driven detractors, Operations for process-driven issues. Cadence: weekly or monthly dashboards track volume trends and segment shifts.
Analysis and Reporting Differences
B2B analysis requires account-level rollup with stakeholder weighting and champion tracking. B2C analysis requires volume monitoring and week-over-week trend detection. B2B reports account health to quarterly business reviews. B2C reports operational issues to weekly dashboards. Different metrics, different cadences, different audiences.
B2B Analysis Focus
Account-level rollup is critical. Don't just average all responses. Weight by account revenue, contract size, or strategic value. Track NPS by account tier: Enterprise, Mid-Market, SMB. Monitor account-level trends over time. Is this account getting healthier or riskier?
Stakeholder segmentation matters. Break down scores by role. If your executive sponsor is a 9 but all the end users are 6s, what does that tell you? Identify accounts where decision-maker NPS is high but user NPS is low. That's renewal risk.
Track champion presence. Do you have at least one promoter in each strategic account? Accounts without champions churn at higher rates.
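Champion presence is a cheap check to automate. A sketch over hypothetical account data:

```python
def accounts_without_champion(responses_by_account):
    """Accounts with no promoter (score >= 9) among their stakeholders."""
    return [account for account, scores in responses_by_account.items()
            if not any(s >= 9 for s in scores)]

book = {
    "acme":    [9, 7, 6],
    "globex":  [7, 7, 8],   # no promoter -> elevated churn risk
    "initech": [10, 4],
}
print(accounts_without_champion(book))  # → ['globex']
```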
Cohort analysis: new customers (zero to 90 days post-onboarding) versus mature customers (one-plus years). Accounts with recent support escalations versus no recent issues. Renewal upcoming in the next 90 days versus mid-contract.
B2C Analysis Focus
Volume and velocity drive B2C analysis. Track score distribution shifts week-over-week. A sudden detractor spike signals an operational issue: shipping delays, website bugs, customer service breakdown.
Monitor response rates by channel and segment. High-volume businesses need automated alerting, not manual review.
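A minimal week-over-week spike alert looks like the sketch below. The 10-point threshold is an assumption you'd calibrate against your normal week-to-week variance:

```python
def detractor_rate(scores):
    """Share of 0-6 responses in a batch of 0-10 scores."""
    return sum(1 for s in scores if s <= 6) / len(scores)

def spike_alert(this_week, last_week, threshold=0.10):
    """Flag a week-over-week jump in detractor share of `threshold` or more."""
    return detractor_rate(this_week) - detractor_rate(last_week) >= threshold

last_week = [9, 9, 8, 7, 7, 6, 9, 8, 10, 7]  # 10% detractors
this_week = [9, 6, 5, 7, 4, 6, 9, 8, 3, 7]   # 50% detractors
print(spike_alert(this_week, last_week))  # → True
```

In production this runs per segment and per channel, so a shipping-delay spike in one region doesn't get diluted by healthy scores everywhere else.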
Segment breakdowns: demographics (age, location, gender if relevant), purchase behavior (first-time buyers versus repeat customers, high spenders versus low spenders), channel (online versus in-store, mobile versus desktop).
Product and feature correlation. Which products or categories drive promoters versus detractors? For SaaS, which features do promoters use that passives don't? For more on analyzing NPS data at scale, see our guide on nps data analysis and reporting.
Reporting to Leadership
B2B reporting focuses on account health. Metric: overall NPS plus account-tier NPS plus detractor count in strategic accounts. Narrative: "We have three detractor accounts in Enterprise tier. Here's the recovery plan."
Frequency: quarterly business reviews, monthly CS dashboards. Action focus: account retention, expansion pipeline, reference customer health.
B2C reporting focuses on operational trends. Metric: overall NPS plus segment trends plus detractor volume plus response rate. Narrative: "NPS dropped five points this month due to shipping delays. Here's what we're fixing."
Frequency: weekly or monthly dashboards. Action focus: operational improvements, product fixes, marketing adjustments.
When to Use Both (and How to Keep Them Separate)
Some businesses operate in both models. Here's how to handle it.
Common Scenarios
SaaS with both enterprise and self-serve tiers: Enterprise customers get a B2B program (account-level, CS-owned, quarterly). Self-serve SMB or individual users get a B2C program (user-level, automated, transactional). Don't combine scores. Track separately, report separately, action separately. For SaaS-specific guidance, see our guide on saas nps surveys.
Platforms serving businesses and consumers: Stripe is an example. Businesses use it, but consumers experience the checkout. Shopify is another. Merchants are B2B, but their customers are B2C. Survey both sides. Merchant NPS runs as a B2B program. Shopper NPS runs as a B2C program.
B2B companies with consumer-like products: Slack is an example. IT buys it, but everyone uses it like a consumer app. Zoom is similar. Corporate accounts, but individual user experience drives adoption.
Hybrid approach: account-level NPS for decision-makers (B2B), user-level NPS for daily users (B2C-style). Cross-reference both for account health.
The Cardinal Rule
Never combine B2B and B2C NPS into one score.
Why this breaks everything: Benchmarks become meaningless because you're averaging apples and oranges. Action triggers fire incorrectly because a B2B detractor doesn't equal a B2C detractor. Leadership can't act on the data: what does a blended 42 NPS tell you? Nothing useful.
How to keep them separate:
- Different survey programs (different send logic, different follow-up workflows)
- Different dashboards (different KPIs, different owners)
- Different response protocols (CS handles B2B, Support handles B2C)
- Report both to leadership with context ("Enterprise NPS: 41, Self-Serve NPS: 53 — here's what each means")
How Real Companies Run B2B vs B2C Programs
Seeing how actual companies structure their NPS programs makes the differences concrete.
B2B Examples
- Adobe (Creative Cloud for Enterprise): Surveys stakeholders quarterly. Product users get in-app surveys about feature satisfaction. IT admins get surveys about deployment and management. Decision-makers (creative directors, VPs) get relationship NPS focused on business value. All three scores roll up to an account health metric tracked by the customer success team. Follow-up is role-specific. A detractor end-user gets product training resources. A detractor IT admin triggers a technical account review. A detractor decision-maker gets an executive business review within two weeks.
- Zendesk: Runs milestone-based NPS for B2B customers. Post-onboarding at 60 days, annual relationship check-ins, post-renewal. Surveys go to admins and power users, not every agent using the tool. Account managers own the follow-up for any account with a detractor decision-maker. Promoter accounts get invited to speak at Zendesk events or participate in case studies.
- Slack (Enterprise Grid): Separates user experience NPS from account health NPS. Daily users get quick in-app surveys about product satisfaction (B2C-style frequency, B2C-style questions). IT buyers and workspace owners get quarterly account health surveys (B2B-style). Customer success teams monitor both, but renewal risk is driven more by decision-maker NPS than end-user NPS.
B2C Examples
- Amazon: Runs transactional NPS post-purchase via email. One question: "How likely are you to recommend Amazon to a friend?" Detractors trigger an automated "We're sorry, here's a $5 credit" response within 24 hours. Promoters occasionally get asked to leave a product review. No human touches the feedback loop until detractor volume spikes in a specific category, which signals a product or logistics issue.
- Airbnb: Surveys both hosts (B2B-adjacent) and guests (B2C). Guest NPS is transactional, sent immediately post-stay. Host NPS is relational, sent quarterly. The programs are completely separate. Guest detractors get customer support outreach. Host detractors get account manager calls (because losing a host means losing inventory). Different stakes, different responses.
- Spotify: Runs periodic relationship NPS via in-app survey. "How likely are you to recommend Spotify to a friend?" Appears once every 90 days for active users. Detractors get a follow-up question: "What's the main reason?" Responses feed into product roadmap priorities. Promoters get prompted to share their Wrapped summary on social media (turns NPS into marketing activation).
What These Examples Show
B2B programs are slow, high-touch, and relationship-focused. Surveys go out quarterly or at major milestones. Human beings follow up with detractors. Recovery windows are days to weeks. The goal is account retention and expansion.
B2C programs are fast, automated, and volume-driven. Surveys go out transactionally or monthly. Automated workflows handle most responses. Recovery windows are hours to days. The goal is repeat purchase and word-of-mouth growth.
Hybrid companies (Slack, Airbnb) run both programs in parallel but keep them completely separate. Different surveys, different owners, different action triggers.
Final Thought
B2B and B2C NPS programs use the same question but measure fundamentally different things. The "likelihood to recommend" question taps into professional risk assessment in B2B and personal satisfaction in B2C. The scores aren't comparable because the psychological bar for giving a 9 or 10 is different in each context.
How you calculate NPS differs. B2B requires stakeholder weighting and account-level aggregation. B2C is straightforward individual scoring. How you run surveys differs. B2B needs quarterly relationship checks with multi-role segmentation. B2C needs transactional velocity with high-volume automation. How you act on scores differs. B2B requires high-touch account manager follow-up. B2C requires automated workflows that scale.
If you're running a hybrid business, keep the programs completely separate. Different surveys, different scoring logic, different dashboards, different action triggers, different owners. Track them independently, report them with context, and optimize each program for what it's actually measuring. Never blend B2B and B2C NPS into one score. The methodology differences make that comparison meaningless.
For a complete overview of how NPS works across all business models and industries, see our net promoter score guide.