Most admissions teams measure satisfaction at one point: after enrolment, once the prospect has become a student. By then, the 99.2% of prospects who dropped out have already gone silent — and with them, every signal about why. Measuring satisfaction across the admissions funnel means instrumenting each stage with the right metric, at the right moment, with the right tool.
This guide gives you a 7-stage framework based on data from 30 partner schools in the 2025-2026 cohort, and explains why CSAT, NPS, CES and attributional surveys each belong at a different step.
Why one metric cannot cover the whole funnel
A single post-enrolment survey captures the opinions of the 0.8% who converted, and misses everyone else. Each stage of the admissions funnel has a distinct emotional signature: curiosity at the website visit, effort at the application, anticipation before the open day, relief after enrolment. Using one metric to measure all of them flattens the signal.
The admissions funnel is also where expectations collapse fastest. Visit to first contact: 91% dropout. First contact to brochure or call-back request: 64% dropout. Brochure or call-back request to open day registration: 42% dropout. Open day registration to attendance: 35% dropout (no-show). Open day attendance to full application: 28% dropout. Full application to final enrolment: 18% dropout. Overall conversion from first visit to enrolment: 0.8%. (Source: funnel analysis across 30 partner schools, 2025-2026 cohort.)
If you cannot name the friction at each of those drop-offs, you cannot fix them. Measurement is the diagnostic layer underneath the conversion layer.
The 7-stage satisfaction framework
Every prospect moves through seven measurable moments, from the first visit to the post-enrolment induction. Each moment has a dominant emotion and a matching metric.
Table 1 — 7-stage funnel, metrics, timing and tools
| Stage | Prospect moment | Recommended metric | Timing | Tool |
|---|---|---|---|---|
| 1. Website visit | Discovery, first impression | Micro-CSAT on exit intent | On page exit, <10 seconds | On-site widget (Hotjar, Typeform) |
| 2. Chatbot or form interaction | First contact, information seeking | CSAT (1-5 scale) | Immediately after interaction | In-chat survey, auto-email |
| 3. Brochure or call-back request | Commitment, expectation set | CES (Customer Effort Score) | <24h after request | Single-question email |
| 4. Open day registration | Intent signal | CSAT on registration UX | Immediately post-submission | Thank-you page poll |
| 5. Open day attendance | Experience, peer comparison | NPS (0-10) | 24-72h after the event | Email with one question |
| 6. Full application submission | Effort, anxiety | CES + qualitative | 48h after submission | Short 3-question survey |
| 7. Final enrolment | Decision rationale | Attributional survey | Week 1 after enrolment | 5-7 question deep-dive |
None of those columns is decorative: change the timing, the metric or the tool and you change what you learn. A CSAT asked three days after a chatbot interaction is already noise. An NPS asked during the application is premature.
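If you wire this into tooling, Table 1 translates naturally into a small piece of configuration. Here is a minimal TypeScript sketch; the FunnelStage type and its field names are illustrative assumptions, not the schema of any particular survey platform:

```typescript
// Table 1 as a typed registry. The FunnelStage shape is an illustrative
// assumption, not the API of a real survey tool.
type Metric = "micro-CSAT" | "CSAT" | "CES" | "NPS" | "attributional";

interface FunnelStage {
  stage: number;
  moment: string;   // prospect moment
  metric: Metric;   // recommended metric
  timing: string;   // when to trigger the survey
  tool: string;     // delivery channel
}

const funnel: FunnelStage[] = [
  { stage: 1, moment: "Website visit",         metric: "micro-CSAT",    timing: "on page exit, <10s",      tool: "on-site widget" },
  { stage: 2, moment: "Chatbot or form",       metric: "CSAT",          timing: "immediately after",       tool: "in-chat survey" },
  { stage: 3, moment: "Brochure / call-back",  metric: "CES",           timing: "<24h after request",      tool: "single-question email" },
  { stage: 4, moment: "Open day registration", metric: "CSAT",          timing: "immediately post-submit", tool: "thank-you page poll" },
  { stage: 5, moment: "Open day attendance",   metric: "NPS",           timing: "24-72h after the event",  tool: "one-question email" },
  { stage: 6, moment: "Full application",      metric: "CES",           timing: "48h after submission",    tool: "3-question survey" },
  { stage: 7, moment: "Final enrolment",       metric: "attributional", timing: "week 1 after enrolment",  tool: "5-7 question deep-dive" },
];
```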
Which metric at which stage
Picking the right metric per stage is the single biggest lever in a Voice of Customer (VoC) programme. The four metrics measure different things.
CSAT — for transactional interactions
CSAT (Customer Satisfaction Score) asks: "How satisfied were you with this interaction?" on a 1-5 scale. It works for short, bounded moments — a chatbot exchange, a form submission, a call-back. It fails when the experience spans hours or days, because respondents average across too many sub-events. Use CSAT at stages 1, 2 and 4.
According to Qualtrics CX research, transactional satisfaction metrics should be collected within 60 minutes of the interaction to preserve recall fidelity. Beyond that, response rates drop and answers become retrospective rationalisation.
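To make the scoring rule concrete, here is a minimal sketch of the CSAT calculation. Treating 4 and 5 on the 1-5 scale as "satisfied" is the common convention, assumed here rather than stated above:

```typescript
// CSAT = share of respondents answering 4 or 5 on the 1-5 scale
// (the 4-5 cut-off is the common convention, assumed for illustration).
function csat(responses: number[]): number {
  if (responses.length === 0) return NaN;
  const satisfied = responses.filter((r) => r >= 4).length;
  return (satisfied / responses.length) * 100;
}

// csat([5, 4, 3, 5, 2]) => 60: three of five respondents scored 4 or above
```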
NPS — for relational moments after a peak experience
NPS (Net Promoter Score), developed by Fred Reichheld and Bain & Company, asks: "How likely are you to recommend us to a friend or colleague?" on a 0-10 scale. It is relational, not transactional. It belongs at stage 5 (after an open day), at the end of the onboarding week, and once a year for current students.
NPS asked after a chatbot interaction is meaningless — the prospect has nothing to recommend yet. NPS asked after an open day tells you whether the event created advocacy.
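The arithmetic behind the score is standard: respondents answering 9-10 are promoters, 0-6 detractors, 7-8 passives, and NPS is the percentage of promoters minus the percentage of detractors:

```typescript
// NPS = % promoters (9-10) minus % detractors (0-6) on the 0-10 scale.
function nps(responses: number[]): number {
  if (responses.length === 0) return NaN;
  const promoters = responses.filter((r) => r >= 9).length;
  const detractors = responses.filter((r) => r <= 6).length;
  return ((promoters - detractors) / responses.length) * 100;
}

// nps([10, 9, 8, 6, 3]) => 0: two promoters cancel two detractors, one passive
```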
CES — for high-effort moments
CES (Customer Effort Score) asks: "How easy was it to complete this task?" on a 1-7 scale. It predicts churn better than satisfaction when the task itself is effortful. Applications, brochure requests with CAPTCHA walls, and long forms are CES territory. Use CES at stages 3 and 6.
Forrester CX benchmarks show that effort scores correlate more strongly with future behaviour than satisfaction scores when the interaction is transactional and cognitively demanding. In admissions, the application is the highest-effort moment of the funnel.
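CES is usually reported as the mean of the 1-7 responses, where higher means easier. A minimal sketch; the follow-up cut-off below is an illustrative assumption, not a published benchmark:

```typescript
// Mean CES on the 1-7 "how easy was it" scale; higher = easier.
function ces(responses: number[]): number {
  if (responses.length === 0) return NaN;
  return responses.reduce((sum, r) => sum + r, 0) / responses.length;
}

// Route high-effort answers to a human for follow-up.
// The cut-off (<= 3) is illustrative, not a published benchmark.
function needsFollowUp(score: number): boolean {
  return score <= 3;
}
```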
Attributional surveys — for post-decision rationale
At stage 7, after the prospect has enrolled, a 5-7 question attributional survey reconstructs the decision path. Which channels mattered? Which moment tipped the balance? Which objection almost broke the deal? This is qualitative gold, collected while memory is still fresh. UK schools often benchmark their results against the NSS (National Student Survey) and OfS student voice research to calibrate expectations.
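If it helps to see the deep-dive as an artefact rather than a concept, the stage-7 questionnaire can live as simple survey configuration. The wording below reuses questions quoted in this article; the AttributionQuestion shape is a hypothetical illustration:

```typescript
// Four of the 5-7 attributional questions, as survey config.
// Wording comes from this article; the shape is a hypothetical illustration.
interface AttributionQuestion {
  id: string;
  prompt: string;
  kind: "choice" | "freeText";
}

const attributionalSurvey: AttributionQuestion[] = [
  { id: "channels",  prompt: "Which channels mattered?",                     kind: "choice" },
  { id: "tipping",   prompt: "Which moment made you decide?",                kind: "freeText" },
  { id: "objection", prompt: "What almost stopped you?",                     kind: "freeText" },
  { id: "compare",   prompt: "Where did we compare poorly to [competitor]?", kind: "freeText" },
];
```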
Dropout benchmarks to target your investment
Not all drop-offs deserve equal attention. Knowing where you lose the most prospects tells you where measurement pays the highest dividends.
Table 2 — Dropout benchmark per stage (Skolbot 30-school panel)
| Stage transition | Dropout rate | Dominant cause | Where to measure |
|---|---|---|---|
| Visit → first contact | 91% | Friction, lack of trust, generic content | Stage 1 micro-CSAT |
| First contact → brochure/call-back request | 64% | Slow response, poor qualification | Stage 2 CSAT |
| Brochure/call-back request → open day registration | 42% | No follow-up, unclear value prop | Stage 3 CES |
| Open day registration → attendance | 35% (no-show) | No reminders, low commitment | Stage 4 CSAT |
| Open day attendance → full application | 28% | Event did not convert interest | Stage 5 NPS |
| Full application → final enrolment | 18% | Competitor offer, family dynamics | Stage 6 CES + stage 7 attributional |
The heaviest drop-off is the first one — 91% between visit and first contact. A supporting Skolbot benchmark makes this concrete: open days show a 52% no-show rate with no follow-up, versus 14% when a chatbot combined with SMS reminders is deployed (Source: Skolbot benchmark, open-day no-show study 2025-2026). The gap between 52% and 14% is the measurable value of instrumenting stages 3 and 4 properly.
Implementation: 3 questions at the right moment, not 10 at the end
The dominant failure mode in admissions VoC is the "end-of-year mega-survey" — 20 questions emailed in July to everyone who interacted with the school since September. Response rate: 4%. Respondents: mostly enrolled students. Signal: confirmation bias.
Replace it with micro-surveys distributed across the funnel. Here is what works, stage by stage:
- Stage 1 (website visit): One question on exit — "Did you find what you were looking for? yes / partly / no" with an optional free-text box. Takes <5 seconds, captures the 91% who are about to leave.
- Stage 2 (after chatbot interaction): "How would you rate this conversation?" 1-5 stars, plus "What could be better?" Skolbot data suggests response rates above 40% when asked in-chat versus below 8% via email.
- Stage 5 (post open day): Three questions — NPS, "What was the highlight?", "What almost made you leave?" Sent within 48 hours.
- Stage 7 (post enrolment): 5-7 questions. "Which moment made you decide?", "What almost stopped you?", "Where did we compare poorly to [competitor]?" Qualitative answers are the gold.
The cumulative rule of three: ask at most three questions per touchpoint, at most three touchpoints before the decision. More than that and completion rates collapse.
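Both limits are easy to enforce mechanically. A minimal sketch: the ProspectLog shape is hypothetical, and the 72-hour cool-off comes from the FAQ at the end of this piece:

```typescript
// Guard rails against survey fatigue: at most three survey touchpoints
// before the decision, never two surveys within 72 hours.
// ProspectLog is a hypothetical shape, timestamps assumed oldest-first.
interface ProspectLog {
  surveysSentAt: Date[]; // timestamp of every survey this prospect received
}

const MAX_TOUCHPOINTS = 3;
const COOL_OFF_MS = 72 * 60 * 60 * 1000;

function maySurvey(log: ProspectLog, now: Date = new Date()): boolean {
  if (log.surveysSentAt.length >= MAX_TOUCHPOINTS) return false;
  const last = log.surveysSentAt[log.surveysSentAt.length - 1];
  return !last || now.getTime() - last.getTime() >= COOL_OFF_MS;
}
```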
ROI of a Voice of Customer programme in admissions
A VoC programme across the admissions funnel is not a cost centre — it pays back through recovered conversion.
Assume a school with 5,000 website visits a month, an overall 0.8% conversion to enrolment and an average student lifetime value of £45,000. That is 40 enrolees a month at current performance. Suppose stage-by-stage instrumentation identifies the two biggest friction points — slow response time at stage 2, and no follow-up at stage 4 — and that fixing them lifts conversion by only 10%.
That 10% uplift means 4 additional enrolees a month, or 48 a year, worth £2.16 million in lifetime revenue. A full VoC programme — tooling, survey design, analysis time — rarely exceeds £40,000 a year. The ROI ratio sits in the 50x range even under conservative assumptions. Medallia research consistently finds that mature VoC programmes in services deliver between 20x and 100x ROI when the insights are fed back into operational change.
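The same arithmetic in code, so every assumption is explicit and easy to vary; all inputs are the worked example's figures, not universal benchmarks:

```typescript
// Worked ROI example from the scenario above; every input is scenario-specific.
const monthlyVisits = 5_000;
const conversionRate = 0.008; // 0.8% visit-to-enrolment
const lifetimeValue = 45_000; // £ per enrolled student
const uplift = 0.1;           // 10% relative lift in conversion
const programmeCost = 40_000; // £ per year: tooling, survey design, analysis

const enroleesPerMonth = monthlyVisits * conversionRate; // 40
const extraPerYear = enroleesPerMonth * uplift * 12;     // 48
const extraRevenue = extraPerYear * lifetimeValue;       // £2,160,000
const roiMultiple = extraRevenue / programmeCost;        // 54x

console.log({ enroleesPerMonth, extraPerYear, extraRevenue, roiMultiple });
```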
The bottleneck is rarely collection. It is the closed loop: does the insight reach the admissions director within a week, and does it change something? If not, the programme becomes a reporting ritual.
Governance: a VoC programme needs an owner
Data collected and never acted upon erodes trust faster than no data at all. Name one owner — usually the head of admissions or head of prospect experience — accountable for:
- Weekly review of stage 2 and stage 5 scores
- Monthly trend analysis across stages
- Quarterly deep-dive on stage 7 attributional data
- Immediate alerts when any stage drops more than 15% month-on-month (automatable; see the sketch below)
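That last alert rule is simple to automate. A minimal sketch, assuming one aggregate score per stage per month; the StageScores shape is hypothetical:

```typescript
// Flag any stage whose score fell more than 15% month-on-month.
// StageScores is a hypothetical shape for monthly aggregates per stage.
type StageScores = Record<number, { previous: number; current: number }>;

const ALERT_THRESHOLD = 0.15; // 15% relative drop triggers an alert

function stagesToAlert(scores: StageScores): number[] {
  return Object.entries(scores)
    .filter(([, s]) => s.previous > 0 &&
      (s.previous - s.current) / s.previous > ALERT_THRESHOLD)
    .map(([stage]) => Number(stage));
}

// stagesToAlert({ 2: { previous: 80, current: 65 } }) => [2] (an 18.75% drop)
```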
UNESCO Higher Education highlights that student voice is most effective when embedded in a continuous improvement cycle, not a one-off audit. The same principle applies before enrolment.
For a wider view of how these metrics fit into prospect expectations, see our pillar on what Gen Z expects from a school website. For the funnel logic itself, the ideal prospect journey to enrolment gives the full map. To price what each lost prospect actually costs, read the real cost of a lost student prospect. And because parents and students answer these surveys differently, see parents vs students — two journeys, two decision logics.
FAQ
What is the difference between CSAT, NPS and CES?
CSAT measures satisfaction with a specific interaction (1-5 scale). NPS measures the likelihood of recommendation after a relational experience (0-10 scale). CES measures perceived effort to complete a task (1-7 scale). Each belongs to a different funnel stage — transactional, relational and effortful respectively.
How often should we survey prospects without causing fatigue?
The rule of three applies: at most three questions per touchpoint, at most three touchpoints before enrolment. A prospect should never receive more than one survey in 72 hours. Survey fatigue collapses response rates below 10%.
What response rate should we expect from admissions surveys?
In-interaction surveys (post-chatbot, post-form) reach 30-50% response rates. Email surveys sent within 24 hours reach 15-25%. Email surveys sent after 7 days rarely exceed 5%. Timing matters more than question quality.
Can a chatbot collect these metrics automatically?
Yes, and it is the most reliable way to hit stages 1, 2 and 4. A chatbot can trigger a one-question CSAT at the end of every interaction, capture a CES rating after a brochure request, and push an SMS NPS after an open day — with no human effort from the admissions team.
Does measurement alone improve conversion?
No. Measurement is the diagnostic layer. Conversion improves when the insight reaches the admissions director within a week and translates into a concrete action — a faster response SLA, a better follow-up sequence, a clearer landing page. Without the closed loop, a VoC programme becomes a reporting ritual.
Measure prospect satisfaction with Skolbot