TL;DR
- Customer Effort Score (CES) measures how easy it was for a customer to get their issue resolved: a 1-7 Likert scale where 7 means "Strongly Agree" the interaction was effortless.
- CES belongs on Salesforce Case records, not Contacts: it measures a specific interaction's friction, not the overall relationship.
- Trigger CES surveys within 1 hour of case closure using Salesforce Flow for the most accurate responses.
- CES predicts repeat contacts and churn more reliably than CSAT in most support operations: it catches process problems that satisfaction scores mask.
- Pair CES with CSAT on cases and NPS on accounts for a complete Salesforce feedback program.
Most Salesforce support teams measure CSAT. Many track NPS. Almost none run CES on their cases.
That's a problem, because CES is the metric that actually predicts whether a resolved ticket generates a follow-up ticket next week. A customer can rate their satisfaction 4 out of 5 and still never want to go through that process again. CSAT caught the outcome. CES catches the experience of getting there.
We've deployed CES programs across Salesforce support operations handling anywhere from 500 to 15,000 cases a month. The teams that add CES alongside CSAT consistently find issues they didn't know existed: unnecessary department transfers, multi-touch resolutions for problems that should take one contact, and "successful" case closures that left customers exhausted. For a broader view of how survey types work together in Salesforce, see our complete Salesforce survey guide.
This guide covers what CES is in a Salesforce context, when to trigger it, how to set it up with Flow and field mapping, and what CES data reveals that CSAT alone can't.
What Is Customer Effort Score (CES) in Salesforce?
Customer Effort Score is a service metric that measures how easy it was for a customer to complete a specific interaction with your business. The standard CES 2.0 question asks customers to rate their agreement with the statement: "The company made it easy for me to handle my issue." Responses fall on a 1-7 Likert scale, where 1 is "Strongly Disagree" and 7 is "Strongly Agree."
The metric has an interesting origin. The Corporate Executive Board (now Gartner) introduced CES in 2008 after their research found that reducing customer effort was a stronger predictor of loyalty than exceeding expectations. The Harvard Business Review popularized this finding in 2010 with an article arguing that companies should stop trying to delight customers and start making things easier instead.
In simple terms: customers don't leave because you weren't impressive enough. They leave because you made them work too hard.
The original CES used a 5-point scale, but respondents frequently misinterpreted higher scores as "more effort" rather than "better experience." CEB addressed this in 2013 with CES 2.0, switching to the 7-point agree/disagree format that's now standard. For the complete CES methodology and question design, see our Customer Effort Score guide.
In a Salesforce context, CES belongs on the Case record. Not the Contact. CES measures friction in a specific interaction, so the data needs to live alongside the case type, the assigned agent, the resolution time, and the queue that handled it. That's what makes the score usable for your support lead.
CES vs CSAT vs NPS: When to Use Each in Salesforce
These three metrics get lumped together constantly, but they measure fundamentally different things and belong at different moments in your Salesforce workflow.
| | CES | CSAT | NPS |
| --- | --- | --- | --- |
| Question | "The company made it easy for me to handle my issue" | "How satisfied were you with your experience?" | "How likely are you to recommend us?" |
| Scale | 1-7 (Agree/Disagree) | 1-5 (Satisfaction) | 0-10 (Likelihood) |
| Best Trigger | Post-case close, post-self-service | Post-interaction (any type) | Quarterly, post-onboarding, pre-renewal |
| Maps To | Case | Case | Contact / Account |
| Measures | Friction / Effort | Satisfaction | Loyalty / Advocacy |
| Predicts | Repeat contacts, process friction | Short-term sentiment | Long-term retention, referral |
The mental model that actually works: CSAT tells you whether the customer was happy with the outcome. CES tells you whether the process of reaching that outcome was painful. NPS tells you whether the overall relationship is healthy enough for the customer to recommend you.
In our Salesforce deployments, teams that run CES alongside CSAT surveys in Salesforce catch process problems that CSAT alone misses entirely. A case can close with a 4/5 CSAT and a 2/7 CES. That's a customer who got their answer but hated the journey. CSAT says "fine." CES says "fix this before they call back."
For relationship-level measurement, NPS surveys in Salesforce handle that separately, mapped to Contact and Account records rather than Cases.
When to Deploy CES Surveys in Salesforce
CES works best when you can tie it to a specific interaction the customer just completed. The closer the survey is to the experience, the more accurate the response. Here are the three scenarios where CES adds the most value in a Salesforce environment.
1. Post-Case Closure
This is the primary use case, and it's where most teams should start.
The trigger: Case.Status changes to "Closed" in Salesforce Flow. The CES survey fires within 30-60 minutes. It maps directly to the Case record, which means you can report on CES by case type, by agent, by queue, and by resolution time. If you're already running helpdesk feedback surveys after ticket closure, CES slots into the same workflow with a different question.
Why within an hour? Because customers remember the friction while it's fresh. By the next day, they remember the outcome but forget the process. And the process is exactly what CES is designed to measure.
One question plus one conditional follow-up for low scores. That's the format. Every additional question reduces completion by 15-20% in our experience.
2. After Self-Service Interactions
A customer reads a knowledge article and resolves their issue without creating a ticket. That's a deflection win. But was it easy?
CES after self-service interactions catches friction that CSAT never would, because the customer technically "resolved" their problem. They just spent 20 minutes finding the right article and reading through three wrong ones first. The outcome was fine. The effort was not.
Same applies to chatbot resolutions. The bot answered the question, but did the customer have to rephrase it four times? CES captures that.
3. After Complex Multi-Step Processes
Onboarding workflows, claims processing, account configuration changes, escalations that required multiple department hand-offs: these are high-effort interactions by nature.
CES identifies which steps create the most friction. If your onboarding CES is 3.2 but your post-case CES is 5.6, you know exactly where to invest. The onboarding process needs simplification. Your case resolution is working.
How to Set Up CES Surveys in Salesforce
The setup follows five steps: connect, build, trigger, map, alert. Each step below covers what you're doing and why, to keep the implementation focused.
Step 1: Connect Your Survey Platform to Salesforce
Install the Zonka Feedback Salesforce package from AppExchange. Grant field-level access to the Salesforce objects you'll be mapping CES data to (Cases at minimum, Contacts if you also want customer-level trending). Set user permissions so the integration can read Case records and write survey response data back.
Step 2: Build the CES Survey
Use the CES 2.0 question format: "The company made it easy for me to handle my issue" on a 1-7 Likert scale.
Add one conditional follow-up question for scores of 3 or below: "What made this difficult?" Open text. No multiple choice. You want the customer's words, not your predetermined categories.
That's the entire survey. Two questions maximum. Resist the urge to add more. The goal is measuring effort, not running a full feedback program on a single case.
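The branching logic behind that two-question format is simple enough to sketch. This is a minimal, platform-neutral Python illustration of the rule, not any survey tool's actual API; the names and the low-score threshold of 3 come from the format described above.

```python
# Sketch of the two-question CES survey logic: one Likert item, plus a
# conditional open-text follow-up shown only for low scores (3 or below).
# Names here are illustrative, not a specific survey platform's API.

CES_QUESTION = "The company made it easy for me to handle my issue."
FOLLOW_UP = "What made this difficult?"
LOW_SCORE_THRESHOLD = 3  # on the 1-7 CES 2.0 scale

def next_question(ces_score):
    """Return the follow-up question for a low score, else None (survey ends)."""
    if not 1 <= ces_score <= 7:
        raise ValueError("CES 2.0 responses must be on a 1-7 scale")
    return FOLLOW_UP if ces_score <= LOW_SCORE_THRESHOLD else None
```

A score of 2 gets the open-text follow-up; a 6 ends the survey immediately, keeping completion rates high.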
Step 3: Configure Case-Close Triggers in Salesforce Flow
Build a Record-Triggered Flow on the Case object. The entry criteria: Case.Status = "Closed." Add a Scheduled Path with a 30-60 minute delay (sending immediately after closure feels intrusive; waiting a day loses accuracy).
Add an audience filter to prevent survey fatigue: exclude any Contact who received a survey in the last 30 days. You can do this with a custom date field on the Contact record that stores the last survey date (updated by the Flow whenever it sends a survey), and a Decision element that checks the field before sending.
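Flow builds this declaratively, but the eligibility rule is worth stating precisely. Here's a minimal Python sketch of the combined send-delay and fatigue-window check; the 45-minute delay and 30-day window are assumptions drawn from the ranges above.

```python
from datetime import datetime, timedelta

FATIGUE_WINDOW = timedelta(days=30)   # one survey per contact per 30 days
SEND_DELAY = timedelta(minutes=45)    # inside the 30-60 minute window

def survey_send_time(case_closed_at, last_surveyed_at=None):
    """Return when to send the CES survey, or None if the contact
    is still inside the fatigue window and should be skipped."""
    if last_surveyed_at is not None:
        if case_closed_at - last_surveyed_at < FATIGUE_WINDOW:
            return None  # surveyed too recently; suppress this send
    return case_closed_at + SEND_DELAY
```

A contact surveyed 10 days ago is skipped; one surveyed 40 days ago gets the survey 45 minutes after case closure.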
For detailed automation patterns, Salesforce's Flow Builder documentation covers the technical setup.
Step 4: Map CES Scores to the Right Salesforce Objects
This is where most teams make their first mistake: mapping to the wrong object.
Map to Case (primary): This is the default for CES. The score attaches to the specific interaction, which enables reporting by case type, agent, queue, and resolution time. Without this mapping, your CES data is just a number with no operational context.
Map to Contact (secondary): If you want to track how individual customers experience effort over time, add a secondary mapping to the Contact record. This lets you identify customers who consistently report high effort across multiple cases.
Zonka supports both custom and managed mapping, including lookup field mapping with reference IDs for 1-to-1 object relationships. The mapping types guide covers which approach fits your program.
Step 5: Set Up Low-CES Alerts
A CES score of 3 or below on a 7-point scale is a signal that something went wrong in the process. Don't just log it.
Configure a Flow that auto-creates a follow-up Task when a low CES response comes in. Assign it to the case owner or their supervisor, depending on your team structure. Set a 48-hour SLA for follow-up. The same closed-loop feedback principles that apply to NPS detractor follow-up work here: detect, route, recover, measure.
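The routing decision the Flow makes can be sketched as follows. This is an illustrative Python model of the rule, not Apex or Flow syntax; the threshold of 3 and the 48-hour SLA come from the paragraph above, and the field names are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

LOW_CES_THRESHOLD = 3           # 3 or below on the 7-point scale
FOLLOW_UP_SLA = timedelta(hours=48)

@dataclass
class FollowUpTask:
    case_id: str
    assignee: str
    due: datetime

def route_low_ces(case_id, ces_score, case_owner, supervisor,
                  responded_at, escalate_to_supervisor=False):
    """Create a follow-up task for a low CES response; return None otherwise."""
    if ces_score > LOW_CES_THRESHOLD:
        return None  # score is acceptable; just log it
    assignee = supervisor if escalate_to_supervisor else case_owner
    return FollowUpTask(case_id, assignee, responded_at + FOLLOW_UP_SLA)
```

Whether the task goes to the case owner or their supervisor is the `escalate_to_supervisor` flag, mirroring the team-structure choice above.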
Build a weekly CES roll-up report for your support lead: average CES by case type, by queue, and by agent. The trends matter more than individual scores. A dip in CES for a specific case type is a process signal. A dip for a specific agent is a coaching signal.
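The roll-up itself is just a grouped average. A minimal sketch, assuming survey responses exported as dicts with a `ces` score plus case attributes (the attribute names are illustrative):

```python
from collections import defaultdict

def ces_rollup(responses, key):
    """Average CES grouped by a case attribute, e.g. 'case_type',
    'queue', or 'agent'. `responses` is a list of dicts, each with
    a numeric 'ces' score and the grouping attribute."""
    totals = defaultdict(lambda: [0, 0])  # group -> [score_sum, count]
    for r in responses:
        totals[r[key]][0] += r["ces"]
        totals[r[key]][1] += 1
    return {group: round(s / n, 2) for group, (s, n) in totals.items()}
```

Running this weekly per `case_type`, `queue`, and `agent` gives the three trend lines the support lead needs; comparing this week's dict against last week's surfaces the dips worth investigating.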
What CES Data Tells You That CSAT Doesn't
This is the section that matters most, because it's the reason to run CES at all. If CSAT already told you everything you needed to know, CES would be redundant. It's not.
According to Gartner's research on customer effort, 96% of customers who reported high-effort service interactions became more disloyal. Not "somewhat less loyal." Actively disloyal. The same research found that CES is 1.8 times more predictive of customer loyalty than satisfaction scores.
In simple terms: CSAT measures whether your customer was happy with the result. CES measures whether they'll come back the next time they have a problem, or just switch to a competitor where it's easier.
Here's where it gets practical. CES identifies process problems. CSAT identifies people problems. That distinction changes what you fix.
When CES is low but CSAT is fine: The customer got their answer, so they rated satisfaction a 4/5. But they were transferred between three departments to get there. CSAT looks acceptable. CES exposes the hand-off friction that will generate a repeat contact next month. The fix isn't agent performance training. It's routing logic.
When CES reveals multi-touch resolution patterns: Cases requiring three or more interactions to resolve almost always score low on CES, even when the final resolution earns a decent CSAT. We've seen support operations where CSAT averaged 4.2/5 across the board, but CES revealed that 30% of cases involved unnecessary escalations or transfers. The satisfaction number looked healthy. The process was broken underneath it.
When CES predicts repeat contacts: High-effort resolutions generate repeat contacts at 2-3x the rate of low-effort ones. The customer's issue was "resolved," but they didn't fully understand the solution, or the resolution created a new problem. CSAT can't distinguish between a clean resolution and a technically-correct-but-confusing one. CES can.
The most valuable signal: when CES and CSAT disagree. A case with high CSAT and low CES means the customer liked the outcome but hated the journey. A case with low CSAT and high CES means the process was smooth but the answer wasn't what they wanted. Both patterns require different interventions. Without CES, you'd treat both as generic "low satisfaction" and miss the root cause entirely.
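The four quadrants above reduce to a small decision rule. This Python sketch uses illustrative cutoffs (CES below 5 on the 7-point scale, CSAT below 4 on the 5-point scale); pick thresholds that match your own distributions.

```python
def classify_feedback(ces, csat, ces_ok=5, csat_ok=4):
    """Map a (CES 1-7, CSAT 1-5) pair to the intervention each
    quadrant calls for. Thresholds are illustrative defaults."""
    low_effort_fail = ces < ces_ok
    outcome_fail = csat < csat_ok
    if low_effort_fail and not outcome_fail:
        return "fix the process"     # liked the outcome, hated the journey
    if outcome_fail and not low_effort_fail:
        return "revisit the answer"  # smooth process, wrong outcome
    if low_effort_fail and outcome_fail:
        return "fix both"            # outcome and process both failed
    return "healthy"
```

A 4/5 CSAT with a 2/7 CES routes to a process fix (routing logic, hand-offs), not agent coaching, which is exactly the distinction the section draws.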
Support teams that track CES by case type in Salesforce reports start seeing patterns within the first month: which workflows create friction, which self-service paths fall short, which types of issues consistently require more effort than they should. That's operational intelligence that CSAT scores alone never surface.
CES Benchmarks and What "Good" Looks Like
A general target on the 7-point CES 2.0 scale: an average CES score of 5 or above. At or above 5, most customers agree the interaction was easy. Below 4, you have a systemic effort problem.
But here's what we tell every team we work with: your trajectory matters more than the number itself.
A CES of 4.8 that was 3.9 six months ago tells a better story than a static 5.2 that hasn't moved. Month-over-month improvement is the metric your leadership should track, not comparison against industry averages that may use different scales and question formats.
That said, some useful context by scenario:
Simple inquiries (password resets, basic how-to questions) should score 5.5+. If they don't, your self-service channels have gaps.
Standard case resolutions (product issues, billing questions) typically land between 4.5 and 5.5 in well-run operations.
Complex escalations (multi-department issues, compliance-related cases) will naturally score lower: 3.5-4.5 is realistic. The goal isn't to make complex cases effortless. It's to reduce unnecessary complexity in the process.
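The scenario bands above can be encoded as a simple lookup, useful for flagging segments in a report. The band values are the article's own numbers; the scenario keys are illustrative labels you'd map from your case types.

```python
# Lower bound of the realistic CES range for each scenario, per the
# benchmarks above. "Meets" here means at or above the band's floor.
BANDS = {
    "simple_inquiry":      5.5,  # password resets, basic how-tos
    "standard_resolution": 4.5,  # product issues, billing questions
    "complex_escalation":  3.5,  # multi-department, compliance cases
}

def meets_benchmark(scenario, avg_ces):
    """True if the segment's average CES clears its scenario floor."""
    return avg_ces >= BANDS[scenario]
```

A simple-inquiry segment averaging 5.2 fails its band even though 5.2 would be healthy overall, which is the self-service gap signal described above.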
By channel: SMS-triggered CES surveys return higher response rates (35-55%) than email-linked surveys (10-18%), and the scores tend to be more immediate and honest because the response happens faster. If you're choosing a primary channel for CES delivery, SMS after case closure gives you the cleanest data.
Build internal benchmarks first. Compare CES across your own case types, queues, and agent teams before looking at external numbers. The right CES survey tool makes this segmentation automatic rather than manual. According to the Salesforce State of Service Report, 88% of customers say good service makes them more likely to repurchase. CES is how you measure whether your service is actually "good" from the customer's perspective, not just from your resolution-rate dashboard.
Wrapping Up
CES doesn't replace CSAT or NPS. It completes the picture. CSAT tells you what customers feel about the outcome. NPS tells you where the relationship stands. CES tells you whether the experience of working with your support team is easy enough that they'll come back willingly, or hard enough that they'll start looking for alternatives. For most Salesforce support operations, it's the missing metric: the one that turns "we close cases" into "we close cases in a way that builds loyalty." Running it is straightforward. The value compounds from the first month.