Net Promoter Score (NPS)

What is Net Promoter Score (NPS)?

Net Promoter Score is a customer loyalty metric that measures how likely customers are to recommend your product or service to others. Developed by Fred Reichheld and Bain & Company in 2003, NPS uses a single survey question scored on a 0-10 scale to categorize customers as promoters, passives, or detractors. The resulting score ranges from -100 to +100 and serves as a benchmark for customer loyalty and satisfaction over time.

NPS has become one of the most widely adopted metrics in B2B SaaS. Its appeal is simplicity: one question, one number, easy to track quarter over quarter. That simplicity is both its greatest strength and its biggest limitation.

For CS teams, NPS works best as one signal in a broader set of health indicators. It tells you how a customer feels about you at a moment in time. It doesn't tell you why they feel that way, what they're going to do about it, or whether the person responding is even the one making the renewal decision. Treating NPS as the whole story instead of one chapter is where teams run into trouble.

TL;DR: What you need to know

  • NPS measures loyalty by asking customers how likely they are to recommend you on a 0-10 scale
  • The formula is simple: subtract the percentage of detractors (0-6) from the percentage of promoters (9-10)
  • Average SaaS NPS is 36, with scores above 50 considered excellent and above 80 world-class
  • Buyers score higher than users on NPS surveys, with a median gap of 10 points, which means who you survey changes your number
  • NPS is a trailing indicator. By the time a customer rates you a 3, they've been unhappy for months

How to calculate Net Promoter Score

The NPS formula has three steps.

First, ask customers: "On a scale of 0 to 10, how likely are you to recommend [product] to a friend or colleague?" Then group responses into three categories. Promoters (9-10) are loyal enthusiasts who will renew and refer. Passives (7-8) are satisfied but unenthusiastic, vulnerable to competitor offers. Detractors (0-6) are unhappy customers who may churn or discourage others from buying.

Then calculate:

NPS = % Promoters - % Detractors

Passives don't factor into the math. They count toward total responses but not the final score.

A quick example: you survey 200 customers. 110 respond as promoters (55%), 50 as passives (25%), and 40 as detractors (20%). Your NPS is 55 minus 20, which gives you 35.
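
The calculation above can be sketched as a small function. This is a minimal illustration, not from any specific NPS tool:

```python
def nps(scores):
    """Compute Net Promoter Score from a list of 0-10 survey responses."""
    if not scores:
        raise ValueError("no responses")
    promoters = sum(1 for s in scores if s >= 9)   # 9-10
    detractors = sum(1 for s in scores if s <= 6)  # 0-6; passives (7-8) are ignored
    return round(100 * (promoters - detractors) / len(scores))

# The worked example: 110 promoters, 50 passives, 40 detractors out of 200
responses = [10] * 110 + [7] * 50 + [3] * 40
print(nps(responses))  # 35
```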

The scale runs from -100 (every respondent is a detractor) to +100 (every respondent is a promoter). In practice, neither extreme happens. What matters is whether your score trends upward over time, not whether it hits an arbitrary benchmark.

NPS benchmarks for B2B SaaS

Bain & Company, the creators of NPS, define the scoring tiers. Anything above 0 is good (you have more promoters than detractors). Above 50 is excellent. Above 80 is world-class.

For SaaS specifically, the benchmarks are tighter. CustomerGauge's B2B NPS research puts the average SaaS NPS at 36, with successful B2B companies typically scoring between 39 and 76. That same research found NPS remains the most trusted metric in B2B at 41%, ahead of CSAT at 26% and CES at 11%.

Those averages hide important variation. Enterprise accounts tend to produce different scores than SMB. Decision-makers who approved the purchase often score higher than the end users who interact with the product daily. Gainsight's CS Index found a median NPS gap of 10 points between buyers (46) and users (36) at the same companies.

That gap matters for CS teams. If you're only surveying the executive sponsor, your NPS looks healthier than the actual adoption picture. If you're only surveying end users, you might miss that the economic buyer is perfectly happy and fully intends to renew. Who you survey shapes the number you get, and both perspectives carry different operational value.
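
One way to surface the buyer-user gap is to compute NPS per respondent segment. The `role` field and the sample data below are hypothetical, for illustration only:

```python
from collections import defaultdict

def nps(scores):
    """Standard NPS: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

def nps_by_segment(responses):
    """Group (role, score) records and compute NPS for each role separately."""
    buckets = defaultdict(list)
    for role, score in responses:
        buckets[role].append(score)
    return {role: nps(scores) for role, scores in buckets.items()}

survey = [("buyer", 9), ("buyer", 10), ("buyer", 8),
          ("user", 9), ("user", 6), ("user", 7)]
print(nps_by_segment(survey))  # buyers score well ahead of users here
```

A wide spread between the segments is the adoption warning sign described above, even when the blended number looks healthy.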

How CS teams use NPS (and where they over-rely on it)

NPS has earned its place in the CS metrics toolkit. It's easy to deploy, easy to track, and it gives you a consistent pulse on customer sentiment across your base. The problems start when teams treat it as more than it is.

NPS as one health score input

Most customer health scores incorporate four to six signals: product usage, support ticket trends, engagement frequency, adoption depth, and NPS or CSAT. NPS typically carries 15-30% of the weight in a composite score. That's appropriate. It captures sentiment, which usage data alone can't measure. But a customer who gives you a 9 and hasn't logged in for three weeks isn't healthy. The health score failures that blindside CS teams at renewal often trace back to over-weighting sentiment metrics while ignoring behavioral ones.
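
A composite score of this shape might look like the sketch below. The weights, signal names, and normalization ranges are illustrative assumptions, not a standard formula:

```python
# Illustrative weights: NPS carries 20% of the composite, behavioral signals the rest.
WEIGHTS = {
    "usage": 0.35,     # product usage (logins, active seats), normalized to 0-1
    "adoption": 0.25,  # depth of feature adoption, normalized to 0-1
    "support": 0.20,   # support health (inverse of ticket/escalation trend), 0-1
    "nps": 0.20,       # sentiment, supplied on the native -100 to +100 scale
}

def health_score(signals):
    """Combine signals into a 0-100 composite; NPS is mapped from [-100, 100] to [0, 1]."""
    nps_norm = (signals["nps"] + 100) / 200
    parts = {**signals, "nps": nps_norm}
    return round(100 * sum(WEIGHTS[k] * parts[k] for k in WEIGHTS))

# An account with promoter-level sentiment (NPS 80) but weak usage still lands low:
print(health_score({"usage": 0.1, "adoption": 0.3, "support": 0.6, "nps": 80}))
```

The point of the weighting is exactly the failure mode described above: a high NPS alone can't pull a low-usage account into "healthy" territory.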

Relational vs. transactional NPS

There are two ways to deploy NPS surveys, and each serves a different purpose.

Relational NPS goes out on a set cadence (quarterly or biannually) and measures overall sentiment toward your company. This is the version that feeds health scores and gives leadership a trend line. It answers: "How does this customer feel about us right now?"

Transactional NPS triggers after a specific interaction: a support ticket resolution, an onboarding milestone, or a training session. This version measures satisfaction with a particular experience. It answers: "How did that interaction go?"

CS teams that run both get a richer picture. The relational score tells you the account's general temperature. The transactional scores tell you which specific touchpoints are raising or lowering that temperature.

When to send NPS surveys in B2B SaaS

Timing affects response quality more than most teams realize. Survey a customer during the first week of onboarding and you'll get a score based on excitement, not experience. Survey them in the middle of a support escalation and you'll capture frustration that may not reflect their overall relationship with you.

Effective timing patterns for SaaS look like this: send relational surveys at least 30 days after onboarding (so customers have enough experience to give a meaningful answer), repeat on a quarterly or biannual cadence, and avoid surveying the same customer more than twice a year. Survey fatigue is real, and low response rates undermine the statistical value of the data you're collecting.
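
Those timing rules can be encoded as a simple eligibility check. The record fields (`onboarded_on`, `surveys_sent`) are hypothetical, not from a specific survey tool:

```python
from datetime import date, timedelta

def can_survey(customer, today, min_days_post_onboarding=30, max_per_year=2):
    """Return True if the relational-NPS timing rules allow a survey today."""
    if today - customer["onboarded_on"] < timedelta(days=min_days_post_onboarding):
        return False  # still inside the onboarding window
    recent = [d for d in customer["surveys_sent"] if today - d < timedelta(days=365)]
    return len(recent) < max_per_year  # cap at twice per rolling year

acct = {"onboarded_on": date(2024, 1, 10), "surveys_sent": [date(2024, 4, 1)]}
print(can_survey(acct, date(2024, 7, 1)))  # True: past 30 days, one survey this year
```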

The trailing indicator problem

NPS tells you how a customer feels after an experience has already happened. By the time someone gives you a 3, they've been frustrated for weeks or months. The score confirms the problem. It doesn't predict it.

Leading indicators like declining login frequency, shrinking feature usage, or a spike in support tickets surface risk earlier. When a customer's usage drops 40% over six weeks, your CS team can intervene before that dissatisfaction crystallizes into a detractor score. NPS catches the customers who are already unhappy. Usage data catches the ones who are getting there.

The strongest CS teams use NPS to validate what their leading indicators are already telling them, not as the primary early warning system.

Closing the loop: what separates useful NPS from vanity NPS

Collecting NPS scores without acting on individual responses is one of the most common waste patterns in CS. The score goes into a dashboard. Leadership reviews the trend line quarterly. Nobody calls the detractors.

Acting on detractor responses

Every detractor response should trigger a follow-up. Not an automated email. A real conversation. When a customer rates you a 4 and writes "the reporting doesn't work the way we need it to," that's a churn signal wrapped in feedback. The CSM should reach out within 48 hours, understand the specific issue, and determine whether it's a product gap, a training opportunity, or a deeper relationship problem.
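
A routing rule for this follow-up process might be sketched as below. The response shape and the `notify` hook are assumptions for illustration, not a real integration:

```python
def route_response(response, notify):
    """Flag detractor responses (0-6) for a human follow-up within 48 hours."""
    if response["score"] <= 6:
        notify(owner=response["csm"],
               message=f"Detractor alert: {response['account']} scored "
                       f"{response['score']} - follow up within 48 hours")
        return "follow_up"
    return "log_only"  # promoters/passives feed the trend line, no alert

alerts = []
route_response({"account": "Acme", "csm": "jordan", "score": 4,
                "comment": "reporting doesn't work the way we need"},
               notify=lambda **kw: alerts.append(kw))
print(alerts[0]["message"])
```

The key design choice is that the alert goes to the account owner for a conversation, not into an automated email sequence.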

The companies that do this well see detractor scores as the most valuable data their NPS program produces. A promoter telling you they're happy is confirming something you already know. A detractor telling you what's broken is giving you a chance to save the account.

Using promoter signals for growth

Promoters are your customer advocacy pipeline. A customer who scores a 9 or 10 and writes specific praise about your product has told you they're ready for a referral ask, a case study conversation, or a G2 review request. CS leaders at companies like HelloSign have noted that roughly 80% of promoters contribute to expansion revenue, while 80% of detractors voice their complaints publicly.

The missed opportunity is treating promoter responses the same as passive ones. If someone just told you they'd recommend your product, that's a signal to deepen the relationship, not file the response away.

The passive problem

Passives (7-8) get ignored because the formula excludes them. That's a mistake. A passive who scored an 8 is one good interaction away from becoming a promoter and one bad one away from becoming a detractor. They represent the most convertible segment in your customer base, and most teams do nothing with them because the math treats them as invisible.

NPS vs. CSAT vs. CES: which metric for which moment

CS teams don't need to choose between these metrics. They need to deploy each one where it adds the most value.

|                      | NPS                                           | CSAT                                              | CES                                                    |
|----------------------|-----------------------------------------------|---------------------------------------------------|--------------------------------------------------------|
| Measures             | Overall loyalty and likelihood to recommend   | Satisfaction with a specific interaction          | Effort required to complete a task or resolve an issue |
| Question             | "How likely are you to recommend us?" (0-10)  | "How satisfied were you with this experience?" (1-5) | "How easy was it to resolve your issue?" (1-7)      |
| Best used for        | Quarterly relationship health checks          | Post-support, post-training, post-onboarding feedback | After self-service or complex process completion   |
| Indicator type       | Trailing (confirms existing sentiment)        | Real-time (captures immediate reaction)           | Predictive (high effort predicts churn)                |
| Survey cadence       | Quarterly or biannually                       | After each relevant interaction                   | After each relevant interaction                        |
| CS health score role | Sentiment signal (15-30% weight)              | Interaction quality signal                        | Friction and effort signal                             |
| Limitation           | Doesn't explain why or predict what's next    | Captures moments, not overall relationship        | Narrow scope, only measures ease of resolution         |

NPS measures loyalty over time. Use it for relational health checks on a quarterly cadence. CSAT surveys measure satisfaction with a specific interaction. Use them after support tickets, training sessions, or onboarding milestones. CES (Customer Effort Score) measures how easy it was for a customer to accomplish a task. Use it after self-service interactions or complex processes.

The combination matters more than any single metric. A customer might give you a high NPS (they like the overall relationship) but a low CSAT on their last support interaction (that ticket took too long to resolve). Without both signals, you'd miss the friction that's slowly eroding the goodwill.

Voice of Customer programs that layer NPS, CSAT, and CES together across different touchpoints build the most complete picture of customer experience. The metrics aren't competitors. They're complements.

Frequently asked questions about Net Promoter Score

Q: How do you calculate NPS?

A: Subtract the percentage of detractors (customers who score 0-6) from the percentage of promoters (customers who score 9-10). Passives (7-8) are excluded from the calculation. The result is your NPS, which ranges from -100 to +100.

Q: What is a good NPS score for SaaS?

A: The average SaaS NPS is 36. Bain & Company considers anything above 0 good, above 50 excellent, and above 80 world-class. Compare your score against your specific industry segment rather than generic benchmarks, and track your trend over time.

Q: How often should you send NPS surveys?

A: Send relational NPS surveys quarterly or biannually. Avoid surveying the same customer more than twice a year to prevent survey fatigue. Wait at least 30 days after onboarding before the first survey so customers have enough experience to give a meaningful response.

Q: What is the difference between NPS and CSAT?

A: NPS measures overall loyalty and likelihood to recommend. CSAT measures satisfaction with a specific interaction or experience. NPS is a broader relationship metric sent on a regular cadence. CSAT is a targeted metric triggered after individual touchpoints like support tickets or training sessions.

Q: Should NPS be tied to CSM compensation?

A: If NPS is tied to compensation, it should be companywide rather than CSM-specific. NPS results are influenced by sales, product, marketing, and support, not just CS. Tying it to individual CSMs creates misaligned incentives and encourages survey gaming rather than genuine improvement.

Q: Why do buyers score higher on NPS than end users?

A: Buyers evaluate the strategic decision to purchase. End users evaluate daily experience with the product. Buyers often score 10 points higher because they're further from the friction of daily use. CS teams should survey both groups and track the gap, since a wide buyer-user gap signals adoption problems.

Q: Can NPS predict churn?

A: NPS is a trailing indicator of churn, not a leading one. A low score confirms dissatisfaction that already exists. Usage data, support ticket trends, and engagement frequency predict churn earlier. Use NPS alongside behavioral signals in your health score rather than as a standalone churn predictor.

Conclusion

Net Promoter Score gives CS teams a simple, consistent way to measure customer loyalty, but it works best when treated as one signal among several rather than the definitive measure of account health. The teams that get the most from NPS are the ones who act on individual responses, understand who they're surveying, and pair sentiment data with behavioral signals that surface risk earlier.

Key Takeaways

  • NPS is most valuable as one input in a composite health score (15-30% weight), not as a standalone metric that drives retention strategy
  • Close the loop on every detractor response within 48 hours and use promoter signals to trigger advocacy and expansion conversations
  • Pair NPS with CSAT and CES to build a complete view of customer experience across different touchpoints and moments

What to do in the next 7 days

  1. Check who you're surveying. Pull your last NPS survey distribution list and see whether you're reaching end users, decision-makers, or both. If it skews entirely toward one group, your score may not reflect the full account picture.
  2. Review your last 10 detractor responses. For each one, check whether a CSM followed up with a conversation. If fewer than half got a real follow-up, build a process for routing detractor alerts to the account owner within 48 hours.
  3. Compare your NPS trend against your usage data trend for the same period. If NPS is stable but product usage is declining, you're seeing the trailing indicator problem in action. Flag the accounts where sentiment and behavior are diverging.

Related terms