CSAT Survey

โ† Back to Glossary

What is a CSAT survey?

A CSAT (Customer Satisfaction Score) survey is a feedback method that measures how satisfied a customer is with a specific interaction, product experience, or service touchpoint. It typically asks a single question ("How satisfied were you with [experience]?") on a 1-5 or 1-10 scale, with the result expressed as a percentage of satisfied respondents. CSAT is the most widely used transactional satisfaction metric in B2B SaaS.

What makes CSAT different from other feedback metrics is its specificity. A Net Promoter Score survey asks how the customer feels about your company overall. A CSAT survey asks how they felt about one particular moment: this support ticket, this onboarding call, this training session.

That specificity is CSAT's superpower for CS teams. When your NPS drops, you know something is wrong but not where. When your CSAT drops after onboarding sessions, you know exactly which touchpoint needs attention. It turns vague dissatisfaction into a specific address.

TL;DR – What you need to know

  • CSAT measures satisfaction with a specific interaction, not overall loyalty or relationship health
  • The formula is straightforward: divide satisfied responses (typically 4-5 on a 5-point scale) by total responses, multiply by 100
  • B2B SaaS CSAT averages are in the high 70s, with top performers reaching 90%+
  • Best deployed after key post-sale moments: onboarding completion, support resolution, training sessions, QBRs
  • Satisfaction with one interaction doesn't guarantee retention. A customer can rate support 5/5 and still churn over an unresolved product gap

How to calculate your CSAT score

The calculation takes three steps.

First, ask customers to rate their satisfaction with a specific experience. The most common format in SaaS is a 1-5 scale where 1 means "very unsatisfied" and 5 means "very satisfied." Some teams use emoji scales, star ratings, or 1-10 ranges. The scale matters less than consistency. Pick one and stick with it so you can track trends.

Then count the satisfied responses. On a 1-5 scale, only responses of 4 ("satisfied") and 5 ("very satisfied") count as positive. This approach comes from research showing that the top two scores are the most reliable predictor of positive customer behavior.

Then calculate:

CSAT = (Number of satisfied responses / Total responses) x 100

A worked example: you send a CSAT survey after 150 support ticket resolutions. 112 customers respond with a 4 or 5. Your CSAT is (112 / 150) x 100 = 74.7%.
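The formula and the worked example above can be sketched in a few lines of Python (the function name and sample ratings are illustrative, not from any particular library):

```python
def csat(responses, satisfied_threshold=4):
    """CSAT = (satisfied responses / total responses) x 100.

    On a 1-5 scale, ratings of 4 and 5 count as satisfied.
    """
    satisfied = sum(1 for r in responses if r >= satisfied_threshold)
    return satisfied / len(responses) * 100

# The worked example: 112 ratings of 4 or 5 out of 150 total responses.
ratings = [5] * 60 + [4] * 52 + [3] * 20 + [2] * 10 + [1] * 8  # 150 ratings
print(round(csat(ratings), 1))  # 74.7
```

The same function works for a 1-10 scale by raising `satisfied_threshold`, which is one reason consistency in scale choice matters more than the scale itself.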

One thing to watch: response rates matter as much as scores. A CSAT of 90% from 15 responses tells you less than a CSAT of 78% from 150 responses. In-app surveys typically pull 20-30% response rates, while email surveys land closer to 10-15%. If your response rate is below 10%, the data may not represent your actual customer base.

When CS teams should (and shouldn't) send CSAT surveys

CSAT works best when tied to a specific, recent interaction. The customer needs to remember what you're asking about, and the experience needs to be fresh enough that their response reflects what happened, not how they feel about you in general.

The right moments to survey

The strongest CSAT deployment points in a post-sale CS context land at moments where a clear interaction has just concluded.

After customer onboarding completion. This is one of the highest-value CSAT moments. The OnRamp 2026 customer satisfaction survey guide recommends surveying 24-48 hours after onboarding wraps, while the experience is still vivid. A low CSAT here is an early warning that the customer's path to value has friction. Nils Vinje, VP of Customer Success at Rainforest QA, puts it directly: the end of onboarding is when the customer has made up their mind about whether your solution solves their problem. If it doesn't, you need to know immediately.

After support ticket resolution. This is the most common CSAT trigger in SaaS, and for good reason. It tells you whether your support team resolved the issue effectively, not just whether they responded quickly. Send the survey within minutes of ticket closure while the interaction is still top of mind.

After training sessions or webinars. If you're investing CSM time in live training, measuring whether customers found it useful is worth the one-question ask. Low CSAT on training often reveals misalignment between what you're teaching and what the customer actually needs.

After QBRs or business reviews. A post-QBR CSAT tells you whether the meeting felt valuable from the customer's perspective, not just whether your team followed the deck. If customers consistently rate QBRs low, the format needs rethinking.

When not to survey

During an active escalation. If a customer is mid-crisis, asking them to rate their satisfaction adds insult to frustration. Wait until the issue is fully resolved.

More than twice per quarter for the same customer. Survey fatigue is a real problem in B2B. Your customers are getting surveyed by every vendor they work with. Every survey you send competes for attention. If you're surveying after every interaction, response rates will crater and the customers who do respond will be the angriest ones, skewing your data.

Without capacity to follow up. Sending a CSAT survey and ignoring the results is worse than not surveying at all. It signals that you're collecting data as a checkbox exercise, not because you intend to act on what you learn.

CSAT benchmarks for B2B SaaS

Retently's 2025 CSAT benchmarks put B2B Software and SaaS companies in the high 70s, with the industry showing strong improvement over recent years. The American Customer Satisfaction Index (ACSI) places software companies at around 76%.

Here's how to read those numbers in context. Anything above 80% is considered excellent in SaaS. Scores between 70-80% are solid but indicate room for improvement at specific touchpoints. Below 70% suggests systemic friction that's likely affecting retention.

But your own trend matters more than industry averages. A team that improves from 68% to 75% over two quarters is making meaningful progress, even if the industry average is 78%. A team sitting at 82% that drops to 76% has a problem worth investigating, even though 76% looks fine in isolation.

The most useful way to track CSAT is by touchpoint, not as a single aggregate number. Your post-support CSAT might be 85% while your post-onboarding CSAT sits at 65%. That aggregate might look like a healthy 75%, but it's hiding a serious onboarding problem. Segment your data by interaction type, and the friction points become visible.
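The segmentation described above can be sketched as a simple grouping step. This is a hypothetical illustration (the touchpoint names and response counts are invented) showing how a healthy-looking aggregate can hide a weak touchpoint:

```python
from collections import defaultdict

def csat_by_touchpoint(responses):
    """Compute a separate CSAT per interaction type.

    `responses` is a list of (touchpoint, rating) pairs; ratings of
    4 or 5 on a 1-5 scale count as satisfied.
    """
    buckets = defaultdict(list)
    for touchpoint, rating in responses:
        buckets[touchpoint].append(rating)
    return {tp: sum(r >= 4 for r in ratings) / len(ratings) * 100
            for tp, ratings in buckets.items()}

# Hypothetical data: strong support scores masking a weak onboarding score.
data = ([("support", 5)] * 85 + [("support", 2)] * 15 +
        [("onboarding", 4)] * 13 + [("onboarding", 3)] * 7)
print(csat_by_touchpoint(data))  # {'support': 85.0, 'onboarding': 65.0}
```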

Where CSAT misleads CS teams

CSAT is a valuable tool with real blind spots. Knowing where it misleads you is as important as knowing where it helps.

Satisfaction doesn't equal retention

A customer can rate every support interaction a 5 out of 5 and still churn. Why? Because the product doesn't solve their core problem, or their champion left, or procurement decided to consolidate vendors. CSAT measures how a specific touchpoint landed. It doesn't measure whether the overall relationship is healthy.

This is where CSAT needs to work alongside other signals in your customer health score. A customer with high CSAT but declining product usage is a different risk profile than one with high CSAT and deepening adoption. The health score models that get this wrong typically over-weight sentiment metrics (CSAT, NPS) while under-weighting behavioral ones (login frequency, feature depth).

High CSAT can mask unresolved problems

A support team that resolves tickets quickly and politely will generate high CSAT scores. That's good. But if the same customers keep submitting tickets about the same issue, high CSAT is masking a product or training gap that's quietly draining the customer's patience.

Track CSAT alongside repeat ticket rates. If a customer rates their support experience a 5 but contacts you about the same workflow three times in two months, that's a problem CSAT alone won't surface.
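One way to operationalize that cross-check is to flag customer-topic pairs with repeat tickets despite high ratings. This is a minimal sketch under assumed data shapes (the tuple format, thresholds, and sample customers are all hypothetical):

```python
from collections import Counter

def flag_masked_problems(tickets, min_repeats=3, csat_floor=4.0):
    """Flag (customer, topic) pairs with repeat tickets despite high CSAT.

    `tickets` is a list of (customer, topic, rating) tuples. A pair is
    flagged when the same customer files at least `min_repeats` tickets
    on the same topic while rating them `csat_floor` or higher on average.
    """
    counts = Counter((c, t) for c, t, _ in tickets)
    flagged = []
    for (customer, topic), n in counts.items():
        if n < min_repeats:
            continue
        ratings = [r for c, t, r in tickets if (c, t) == (customer, topic)]
        avg = sum(ratings) / len(ratings)
        if avg >= csat_floor:
            flagged.append((customer, topic, n, avg))
    return flagged

# Hypothetical example: one customer keeps asking about the same workflow,
# rating every interaction highly, while another has a one-off ticket.
tickets = [("acme", "export", 5), ("acme", "export", 5),
           ("acme", "export", 4), ("beta", "login", 5)]
print(flag_masked_problems(tickets))
```

Only the repeat-ticket pair is flagged, even though its CSAT looks healthy in isolation.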

The question shapes the answer

"How satisfied were you with your support experience?" and "How satisfied are you with our product?" measure completely different things, but both get called "CSAT." If your surveys mix these question types, your aggregate score becomes meaningless.

Be precise about what you're measuring. Each CSAT survey should reference the specific interaction: "How satisfied were you with the resolution of ticket #4521?" gives you actionable data. "How satisfied are you overall?" gives you a number that belongs in an NPS survey, not a CSAT one.

CSAT vs. NPS vs. CES: when to use each

CS teams don't need to pick one metric. They need to deploy each one where it adds the most value. The three metrics answer fundamentally different questions.

|                   | CSAT | NPS | CES |
|-------------------|------|-----|-----|
| Core question     | "How satisfied were you with this experience?" | "How likely are you to recommend us?" | "How easy was it to resolve your issue?" |
| What it measures  | Satisfaction with a specific interaction | Overall loyalty and advocacy likelihood | Effort required to complete a task |
| Scale             | 1-5 (% of 4s and 5s) | 0-10 (% promoters minus % detractors) | 1-7 (% of 5s, 6s, and 7s) |
| Best CS use case  | After support, onboarding, training, QBRs | Quarterly relationship health check | After self-service or complex resolution flows |
| Timing            | Within minutes to 24 hours of interaction | Quarterly or biannually on a set cadence | Immediately after task or process completion |
| Strength          | Pinpoints which touchpoints work or fail | Tracks loyalty trends and benchmarks over time | Strongest predictor of repurchase behavior |
| Blind spot        | High scores can mask unresolved product gaps | Trailing indicator, confirms existing sentiment | Narrow scope, only measures ease of resolution |
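The scoring rules for the three metrics can be sketched side by side (a minimal illustration using the standard cutoffs: 4-5 satisfied for CSAT, 9-10 promoters and 0-6 detractors for NPS, 5-7 low-effort for CES):

```python
def csat(responses):
    """1-5 scale: percentage of 4s and 5s."""
    return sum(r >= 4 for r in responses) / len(responses) * 100

def nps(responses):
    """0-10 scale: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(r >= 9 for r in responses)
    detractors = sum(r <= 6 for r in responses)
    return (promoters - detractors) / len(responses) * 100

def ces(responses):
    """1-7 scale: percentage of 5s, 6s, and 7s."""
    return sum(r >= 5 for r in responses) / len(responses) * 100

print(csat([5, 4, 4, 3, 2]))     # 60.0
print(nps([10, 9, 8, 7, 6, 3]))  # 0.0 (two promoters, two detractors)
print(ces([7, 6, 5, 4, 2]))      # 60.0
```

Note that NPS can be negative, while CSAT and CES are always 0-100, which is one reason the three numbers should never be averaged together.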

Use all three at different moments. CSAT diagnoses specific touchpoints. NPS tracks overall loyalty. CES identifies friction. Together they build a complete feedback picture.

CSAT answers: "Did this specific interaction meet your expectations?" Use it after support tickets, onboarding milestones, training sessions, and QBRs. It's your diagnostic tool for individual touchpoints.

NPS answers: "How loyal are you to us overall?" Use it on a quarterly or biannual cadence as a relationship temperature check. It feeds into health scores and benchmarking. (See the full NPS glossary entry for calculation details and CS-specific guidance.)

CES (Customer Effort Score) answers: "How easy was it to get what you needed?" Use it after self-service interactions, complex processes, or any moment where effort is the variable that matters. Research shows CES is a stronger predictor of repurchase behavior than either CSAT or NPS, because customers who have to work too hard to get help rarely come back for more.

Together, these three metrics feed a comprehensive Voice of Customer program. CSAT tells you which touchpoints work. NPS tells you how the overall relationship feels. CES tells you where friction lives. The combination gives CS teams what customer sentiment data alone never can: a specific, actionable map of where to invest and where to fix.

Frequently asked questions about CSAT surveys

Q: How do you calculate CSAT?

A: Divide the number of satisfied responses (ratings of 4 or 5 on a 5-point scale) by the total number of responses, then multiply by 100. The result is your CSAT percentage. For example, 80 satisfied responses out of 100 total gives you a CSAT of 80%.

Q: What is a good CSAT score for SaaS?

A: B2B SaaS averages are in the high 70s. Anything above 80% is considered excellent. Below 70% suggests friction worth investigating. Your own trend over time is more meaningful than comparing against industry averages, since benchmarks vary by touchpoint, segment, and survey methodology.

Q: How is CSAT different from NPS?

A: CSAT measures satisfaction with a specific interaction (like a support ticket or onboarding call). NPS measures overall loyalty and likelihood to recommend. CSAT is transactional and triggered by events. NPS is relational and sent on a regular cadence. CS teams need both for a complete picture.

Q: When is the best time to send a CSAT survey?

A: Within 5 minutes to 24 hours after the interaction you're measuring. The experience should be fresh in the customer's mind. The strongest moments are after onboarding completion, support ticket resolution, training sessions, and business reviews. Avoid surveying during active escalations.

Q: How often should you survey customers with CSAT?

A: Tie surveys to specific interactions rather than a fixed schedule. Avoid surveying the same customer more than twice per quarter to prevent fatigue. In-app surveys generate higher response rates (20-30%) than email (10-15%), so choose your channel based on where customers are most likely to respond.

Q: Can CSAT predict churn?

A: Not on its own. CSAT captures satisfaction with individual interactions, which doesn't account for product fit, stakeholder changes, or competitive pressure. Consistently low CSAT across multiple touchpoints can signal friction that contributes to churn over time, but it works best as one input in a broader health score model alongside usage data and NPS.

Q: What scale should CSAT surveys use?

A: The 1-5 scale is most common in SaaS because it's simple and generates high response rates. Some teams use 1-10, emoji scales, or star ratings. The scale matters less than consistency. Pick one format, use it everywhere, and track trends over time rather than obsessing over the absolute number.

Conclusion

CSAT surveys give CS teams the most direct feedback loop available for specific post-sale interactions. When deployed at the right moments and paired with other metrics, they tell you exactly which touchpoints are working and which ones need attention. The teams that get the most from CSAT treat it as a diagnostic tool for individual experiences, not a proxy for overall account health.

Key Takeaways

  • Deploy CSAT after specific interactions (onboarding, support, training, QBRs) rather than on a generic schedule, and keep surveys to one question plus an optional follow-up
  • Track CSAT by touchpoint rather than as a single aggregate number, because a healthy average can mask serious friction at individual stages
  • Pair CSAT with NPS and CES to build a complete view of customer experience, using each metric where it adds the most diagnostic value

What to do in the next 7 days

  1. Audit which post-sale interactions currently trigger a CSAT survey. If you're only surveying after support tickets, identify one additional touchpoint (onboarding completion or QBRs) where adding a survey would give you new diagnostic data.
  2. Segment your last quarter of CSAT data by touchpoint type. Break out support CSAT, onboarding CSAT, and any other categories separately. Look for the touchpoint with the lowest score and investigate what's driving dissatisfaction there specifically.
  3. Check your response rates. If any survey channel is below 10%, the data may not be representative. Test switching from email to in-app delivery for one survey type and compare response rates over two weeks.

Related terms