Why this diagnostic is urgent
Your future students no longer start their research on Google. In 2026, 41% of 16-to-24-year-olds use an AI engine (ChatGPT, Perplexity, Gemini) as their first point of contact when researching post-secondary education (Source: UCAS/Savanta survey, Jan 2026, 4,200 UK sixth-formers and undergraduates). That figure was 12% in 2024. The shift is underway, and it is fast.
The question is no longer whether AI engines influence student recruitment. It is whether your institution appears in their answers – or whether only your competitors do.
This diagnostic takes 30 minutes, requires no paid tools and produces a prioritised correction plan.
Step 1: Test your branded queries
Branded queries are the most basic: the prospect types your institution's name directly into an AI engine. If the AI does not know you by name, the problem is serious.
The 3 prompts to test
Submit these three prompts to ChatGPT, Perplexity and Gemini (9 tests total):
- "What do you know about [your university]?" → The engine should return: full name, location, degree types, accreditations, general positioning
- "[Your university] student reviews" → The engine should cite student feedback, scores or testimonials
- "[Your university] tuition fees and graduate outcomes" → The engine should provide concrete figures
Scoring grid
For each response, score on 4 points:
| Criterion | 0 points | 1 point |
|-----------|----------|---------|
| Institution named correctly | Not mentioned or name wrong | Exact name |
| Information is accurate | Factual errors | Correct data |
| Accreditations cited | Absent | At least one cited |
| Verifiable figures provided | No figures | At least one sourced figure |
Score /12 per engine (3 prompts x 4 criteria). A score below 6 on any engine means your institution is poorly referenced in its corpus. A score of 0 means you are invisible.
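The per-engine arithmetic can be tallied with a short script. This is a sketch, not part of the methodology itself: record each criterion as 0 or 1, and the function and threshold names below are illustrative.

```python
# Score one engine's branded-query test: 3 prompts x 4 binary criteria.
# Criteria order: named correctly, accurate, accreditations cited, figures provided.

def branded_score(prompts: list[list[int]]) -> int:
    """Sum 0/1 marks across 3 prompts and 4 criteria (max 12)."""
    assert len(prompts) == 3 and all(len(p) == 4 for p in prompts)
    return sum(sum(p) for p in prompts)

def verdict(score: int) -> str:
    """Apply the thresholds from the scoring grid."""
    if score == 0:
        return "invisible"
    if score < 6:
        return "poorly referenced"
    return "adequately referenced"

# Example marks for one engine across the three branded prompts.
chatgpt = [[1, 1, 0, 0], [1, 0, 0, 0], [1, 1, 0, 1]]
print(branded_score(chatgpt), verdict(branded_score(chatgpt)))  # prints: 6 adequately referenced
```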
Average score observed across 50 institutions tested: 4.2/12 on ChatGPT, 5.8/12 on Perplexity, 3.1/12 on Gemini (Source: Skolbot GEO diagnostic, panel of 50 European institutions, Feb 2026). Russell Group universities average 7.1/12. Post-92 universities average 2.8/12.
Step 2: Test your generic queries
Generic queries are the most strategic. The prospect is not searching for your institution specifically โ they are searching for "the best business school in London" or "an MBA with placements." These are the queries where the visibility battle is fought.
The 5 prompts to test
Adapt these prompts to your context (city, discipline, level):
- "What are the best [type of institution] in [city]?" → Example: "What are the best business schools in Manchester?"
- "What course should I study to work in [field]?" → Example: "What course should I study to work in data science?"
- "[Type of institution] with placements in [city/region]" → Example: "Engineering university with placements in the Midlands"
- "[Your university] vs [competitor] comparison" → Example: "Warwick vs Bath"
- "Reviews of [type of course] in the UK for international students" → Example: "Reviews of MBA programmes in the UK for international students"
Scoring grid
For each prompt, score:
| Criterion | Score |
|-----------|-------|
| Your institution is mentioned | 2 points |
| Your institution is in the top 3 recommendations | 1 bonus point |
| Information about your institution is accurate | 1 point |
| A differentiating attribute is cited (accreditation, specialism, price) | 1 point |
Maximum score: 20 points for the base grid (5 prompts x 4 points), plus up to 5 bonus points for top-3 placements. A score below 5 means your institution is absent from AI recommendations for its strategic queries.
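The grid above can be tallied with a small script. This sketch assumes that an unmentioned institution scores 0 on every criterion and that the top-3 bonus only applies when the institution is mentioned; the names are illustrative.

```python
# Score one generic prompt: mention (2 pts), top-3 bonus (1), accuracy (1), attribute (1).
def generic_prompt_score(mentioned: bool, top3: bool, accurate: bool, attribute: bool) -> int:
    if not mentioned:
        return 0  # assumption: nothing else can score if the institution is absent
    return 2 + int(top3) + int(accurate) + int(attribute)

# Five prompts: base maximum 20, or 25 counting every bonus point.
results = [
    (True, True, True, False),    # best-of query: cited in top 3, accurate
    (True, False, True, True),    # field query: cited with a differentiating attribute
    (False, False, False, False), # placements query: absent
    (True, False, False, False),  # comparison query: mentioned, nothing verifiable
    (False, False, False, False), # reviews query: absent
]
total = sum(generic_prompt_score(*r) for r in results)
print(total)  # prints: 10
```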
Across the 50 institutions tested, 72% score 0 on ChatGPT's generic queries – they are simply never mentioned (Source: Skolbot GEO diagnostic, Feb 2026). On Perplexity, that figure drops to 54%, confirming that Perplexity is more responsive to recent content.
Step 3: Audit your structured data
Schema.org structured data is the most actionable technical lever. This step takes 5 minutes per page.
The 3-click test
- Open the Google Rich Results Test
- Enter your homepage URL, then a programme page URL
- Check for the following schemas:
| Schema | Present? | GEO impact |
|--------|----------|------------|
| EducationalOrganization | yes/no | Critical – identifies your institution as an entity |
| Course | yes/no | High – makes each programme citable |
| FAQPage | yes/no | High – provides extractable answers |
| AggregateRating | yes/no | Moderate – verifiable social proof |
If none of these schemas are detected, your site is technically invisible to AI engines. This is the case for 82% of European institutions (Source: Skolbot technical audit, 120 institutions, Jan 2026).
To implement these schemas, our guide to structured data for universities details the process with JSON-LD code examples.
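As a rough illustration of what this step checks for, here is a minimal pair of JSON-LD payloads built with Python. Every value is a placeholder, and the required and recommended properties should be verified against the Schema.org definitions for these types.

```python
import json

# Minimal JSON-LD sketch for the two critical schemas from the audit table.
# All values are placeholders -- substitute your institution's real data.
organization = {
    "@context": "https://schema.org",
    "@type": "EducationalOrganization",
    "name": "Example University",
    "url": "https://www.example.ac.uk",
    "address": {
        "@type": "PostalAddress",
        "addressLocality": "London",
        "addressCountry": "GB",
    },
}
course = {
    "@context": "https://schema.org",
    "@type": "Course",
    "name": "MSc Data Science",
    "provider": {"@type": "EducationalOrganization", "name": "Example University"},
}

# Each payload belongs in its own <script type="application/ld+json"> tag in the page head.
print(json.dumps(organization, indent=2))
print(json.dumps(course, indent=2))
```

Once deployed, re-run the Google Rich Results Test to confirm the schemas are detected.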
Step 4: Evaluate your verifiable data density
AI engines cite facts, not slogans. This step assesses the richness of verifiable data on your key pages.
The entity-counting method
Open your 5 most visited pages (homepage, main programme page, admissions page, fees page, student life page) and count for each:
- Sourced figures – Employment rate, salary, student numbers, ranking position, with a verifiable source
- Named entities – Accreditations (AACSB, TEF), organisations (OfS, UCAS), rankings (QS, THE), named partners
- Precise dates – 2026 intake, HESA Graduate Outcomes 2025, QS Ranking 2026
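A rough first pass at the count can be automated with regular expressions. This only approximates the manual method – the patterns and entity list below are illustrative and a human check remains necessary.

```python
import re

# Illustrative patterns: sourced figures (percentages or GBP amounts),
# a small allow-list of named entities, and four-digit years.
FIGURE = re.compile(r"\b\d[\d,.]*\s*%|£\s?\d[\d,.]*")
ENTITY = re.compile(r"\b(AACSB|EQUIS|TEF|OfS|UCAS|HESA|QS|THE)\b")
YEAR = re.compile(r"\b20\d{2}\b")

def verifiable_density(text: str) -> int:
    """Count candidate verifiable data points in a page's text."""
    return sum(len(pattern.findall(text)) for pattern in (FIGURE, ENTITY, YEAR))

page = "94% of our 2025 graduates were employed within 6 months (HESA Graduate Outcomes)."
print(verifiable_density(page))  # prints: 3 (the percentage, the year, the HESA mention)
```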
Scoring
| Verifiable data points per page | Level |
|---------------------------------|-------|
| 0-2 | Critical – content too generic for AI |
| 3-5 | Insufficient – some signals but not enough |
| 6-9 | Adequate – exploitable base for AI engines |
| 10+ | Excellent – high density, strong citation probability |
The observed median is 2.3 verifiable data points per page across European university websites (Source: Skolbot semantic analysis, 800 pages from 120 institutions, Feb 2026). The top 10 GEO institutions show a median of 8.7 verifiable data points per page.
The gap is considerable. It alone explains why some institutions are systematically cited while others are systematically ignored.
Step 5: Map your external mentions
AI engines cross-reference sources. The more your institution is mentioned on trusted third-party sites, the more it is considered notable and reliable.
The 12-source checklist
Check whether your institution is listed (with current information) on each of these sites:
| Source | Type | Verified? |
|--------|------|-----------|
| UCAS | Institutional | yes/no |
| OfS | Regulatory | yes/no |
| HESA | Statistical | yes/no |
| QS World University Rankings | Ranking | yes/no |
| THE World University Rankings | Ranking | yes/no |
| Complete University Guide | Ranking | yes/no |
| WhatUni | Review platform | yes/no |
| StudyPortals | International directory | yes/no |
| Google Business Profile | Local | yes/no |
| Wikipedia (dedicated article) | Encyclopaedia | yes/no |
| LinkedIn (institution page) | Professional network | yes/no |
| AACSB / EQUIS / TEF | Accreditation | yes/no |
Scoring
| Sources confirmed | Level |
|-------------------|-------|
| 0-3 | Critical – minimal visibility |
| 4-6 | Insufficient – efforts needed |
| 7-9 | Adequate – solid base |
| 10-12 | Excellent – high AI trust profile |
Institutions present on 7+ third-party sources are 3.2x more likely to be cited by an AI engine than those on 3 or fewer (Source: Skolbot GEO correlation analysis, 120 institutions, Feb 2026).
Diagnostic summary: your overall score
Add your scores across the 5 steps to get your AI visibility profile:
| Step | Max score | Your score |
|------|-----------|------------|
| 1. Branded queries | 12 | __ /12 |
| 2. Generic queries | 20 | __ /20 |
| 3. Structured data | 4 schemas | __ /4 |
| 4. Data density | 10+ per page | __ (median) |
| 5. External mentions | 12 sources | __ /12 |
Interpretation
- Profile A (high scores throughout) – Well positioned. Maintain freshness and monitor quarterly
- Profile B (strong on brand, weak on generic) – The AI knows you but does not recommend you. Work on structured content and verifiable data
- Profile C (low throughout except mentions) – Your reputation exists but your site does not reflect it. Priority: Schema.org
- Profile D (low throughout) – Full overhaul needed. The plan below is your roadmap
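The four profiles can be approximated in code. The cut-offs below are illustrative assumptions layered on top of the per-step grids, not thresholds stated in the methodology.

```python
def profile(branded: int, generic: int, schemas: int, density: float, mentions: int) -> str:
    """Map the five step scores to profiles A-D (illustrative thresholds)."""
    brand_ok = branded >= 6           # out of 12
    generic_ok = generic >= 5         # out of 20
    technical_ok = schemas >= 2 and density >= 6
    mentions_ok = mentions >= 7       # out of 12
    if brand_ok and generic_ok and technical_ok and mentions_ok:
        return "A"
    if brand_ok and not generic_ok:
        return "B"
    if mentions_ok and not (brand_ok or generic_ok or technical_ok):
        return "C"
    return "D"

# Known by name, never recommended, thin site data: profile B.
print(profile(branded=8, generic=2, schemas=1, density=2.3, mentions=8))  # prints: B
```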
Prioritised correction plan
Priority 1 – Week 1: the technical foundation
Implement Schema.org (EducationalOrganization, Course, FAQPage) on your key pages. A developer can do this in 3 to 5 days.
Priority 2 – Week 2: content enrichment
Add verifiable data to your 5 most visited pages: sourced employment rate, median salary, named accreditations. Target: 8+ verifiable data points per page.
Priority 3 – Week 3: structured FAQs
Create marked-up FAQs on your admissions and programme pages. Answer the most common questions prospects ask.
Priority 4 – Weeks 4-8: external mentions
Update your listings on UCAS, OfS, HESA, QS, THE. Complete your Google Business Profile and encourage student reviews.
Priority 5 – Ongoing: freshness
Quarterly update of programme pages. Two blog posts per month minimum.
For a deep understanding of GEO strategy in higher education, our complete GEO guide for universities covers the 5 pillars of AI visibility. And for ROI calculation on these actions, see our student chatbot ROI methodology.
FAQ
Does this diagnostic work for all types of institutions?
Yes. The methodology applies to business schools, engineering schools, computing, communications, private universities and training providers. The test queries should be adapted to your discipline and geography, but the scoring grid is universal.
How often should I repeat this diagnostic?
A full diagnostic per quarter is sufficient. A lighter check (generic queries only) can be done monthly. AI engines update their models and indices continuously, but significant visibility changes take 4 to 8 weeks to materialise.
My score is low on ChatGPT but adequate on Perplexity. What should I do?
Perplexity reacts fast thanks to real-time RAG. ChatGPT depends on its historical corpus. Focus on the levers that impact both: Schema.org, verifiable data, third-party mentions. ChatGPT will catch up at its next corpus update.
Can I run this diagnostic on my competitors?
Yes, and it is recommended. Test the same queries and note which competitors appear. This identifies the attributes AI engines retain about them but not about you.



