Why this diagnostic is urgent
Your future students no longer start their research on Google. In 2026, 41% of 16-to-24-year-olds use an AI engine (ChatGPT, Perplexity, Gemini) as their first point of contact when researching post-secondary education (Source: survey data, Jan 2026, 4,200 high school juniors, seniors and undergraduates). That figure was 12% in 2024. The shift is underway, and it is fast.
The question is no longer whether AI engines influence student recruitment. It is whether your institution appears in their answers, or whether only your competitors do.
This diagnostic takes 30 minutes, requires no paid tools and produces a prioritized correction plan.
Step 1: Test your branded queries
Branded queries are the most basic: the prospect types your institution's name directly into an AI engine. If the AI does not know you by name, the problem is serious.
The 3 prompts to test
Submit these three prompts to ChatGPT, Perplexity and Gemini (9 tests total):
- "What do you know about [your university]?" β The engine should return: full name, location, degree types, accreditations, general positioning
- "[Your university] student reviews" β The engine should cite student feedback, scores or testimonials
- "[Your university] tuition and graduate outcomes" β The engine should provide concrete figures
Scoring grid
Score each response against four binary criteria:
| Criterion | 0 points | 1 point |
|---|---|---|
| Institution named correctly | Not mentioned or name wrong | Exact name |
| Information is accurate | Factual errors | Correct data |
| Accreditations cited | Absent | At least one cited |
| Verifiable figures provided | No figures | At least one sourced figure |
Score /12 per engine (3 prompts × 4 criteria). A score below 6 on any engine means your institution is poorly referenced in its corpus. A score of 0 means you are invisible.
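If you are testing several engines or repeating the diagnostic quarterly, a small script keeps the tally consistent. A minimal sketch in Python; every 0/1 value below is an illustrative placeholder, not a real result:

```python
# Tally branded-query scores: 3 prompts x 4 binary criteria per engine.
# Criteria order: named correctly, accurate, accreditations, figures.
# All 0/1 values are illustrative placeholders; fill in your own.

scores = {
    "ChatGPT":    {"identity": [1, 1, 0, 0], "reviews": [1, 0, 0, 0], "outcomes": [1, 1, 0, 1]},
    "Perplexity": {"identity": [1, 1, 1, 0], "reviews": [1, 1, 0, 1], "outcomes": [1, 1, 0, 1]},
    "Gemini":     {"identity": [1, 0, 0, 0], "reviews": [0, 0, 0, 0], "outcomes": [1, 0, 0, 0]},
}

for engine, prompts in scores.items():
    total = sum(sum(points) for points in prompts.values())
    flag = "" if total >= 6 else "  <- poorly referenced (below 6)"
    print(f"{engine}: {total}/12{flag}")
```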
Average score observed across 50 institutions tested: 4.2/12 on ChatGPT, 5.8/12 on Perplexity, 3.1/12 on Gemini (Source: Skolbot GEO diagnostic, panel of 50 institutions, Feb 2026). Ivy League and R1 research universities average 7.1/12. Regional and smaller private institutions average 2.8/12.
Step 2: Test your generic queries
Generic queries are the most strategic. The prospect is not searching for your institution specifically β they are searching for "the best business school in Chicago" or "an MBA with internships." These are the queries where the visibility battle is fought.
The 5 prompts to test
Adapt these prompts to your context (city, discipline, level); a small template-filling sketch follows the list:
- "What are the best [type of institution] in [city/state]?" β Example: "What are the best business schools in Boston?"
- "What should I major in to work in [field]?" β Example: "What should I major in to work in data science?"
- "[Type of institution] with co-ops or internships in [city/region]" β Example: "Engineering colleges with co-ops in the Midwest"
- "Comparison [your university] vs [competitor]" β Example: "Michigan vs Wisconsin"
- "Reviews of [type of program] in the US for international students" β Example: "Reviews of MBA programs in the US for international students"
Scoring grid
For each prompt, score:
| Criterion | Score |
|---|---|
| Your institution is mentioned | 2 points |
| Your institution is in the top 3 recommendations | 1 bonus point |
| Information about your institution is accurate | 1 point |
| A differentiating attribute is cited (accreditation, specialism, price) | 1 point |
Maximum base score: 20 points (5 prompts × 4 points), plus up to 5 bonus points for top-3 placements. A base score below 5 means your institution is absent from AI recommendations for its strategic queries.
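To make the base-plus-bonus arithmetic concrete, here is a minimal tally sketch; the 0/1 flags are placeholders to replace with your own observations:

```python
# Tally generic-query scores: per prompt, base = mentioned (2 pts)
# + accurate (1 pt) + attribute cited (1 pt), plus a top-3 bonus point.
# All flags below are placeholders, not real results.

results = [
    # (mentioned, top3_bonus, accurate, attribute) as 0/1 flags
    (1, 1, 1, 0),   # best-in-city query
    (0, 0, 0, 0),   # major-for-field query
    (1, 0, 1, 1),   # co-op/internship query
    (1, 0, 1, 0),   # comparison query
    (0, 0, 0, 0),   # international-reviews query
]

base = sum(2 * m + a + d for m, _, a, d in results)
bonus = sum(b for _, b, _, _ in results)
print(f"base {base}/20, bonus {bonus}/5, total {base + bonus}")
```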
Across 50 institutions tested, 72% score 0 on ChatGPT's generic queries: they are simply never mentioned (Source: Skolbot GEO diagnostic, Feb 2026). On Perplexity, that figure drops to 54%, confirming that Perplexity's real-time retrieval makes it quicker to surface recent content.
Step 3: Audit your structured data
Schema.org structured data is the most actionable technical lever. This step takes 5 minutes per page.
The 3-click test
- Open the Google Rich Results Test
- Enter your homepage URL, then a program page URL
- Check for the following schemas:
| Schema | Present? | GEO impact |
|---|---|---|
| EducationalOrganization | yes/no | Critical: identifies your institution as an entity |
| Course | yes/no | High: makes each program citable |
| FAQPage | yes/no | High: provides extractable answers |
| AggregateRating | yes/no | Moderate: verifiable social proof |
If none of these schemas are detected, your site is technically invisible to AI engines. This is the case for 82% of institutions analyzed (Source: Skolbot technical audit, 120 institutions, Jan 2026).
To implement these schemas, our guide to structured data for universities details the process with JSON-LD code examples.
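As a starting point before consulting that guide, here is a minimal sketch of the two most critical schemas, generated from Python dictionaries so the markup can be scripted across many pages; the institution name, URL, address and course details are hypothetical placeholders:

```python
import json

# Minimal EducationalOrganization + Course markup (hypothetical values).
# Paste the printed output into a <script type="application/ld+json"> tag.

organization = {
    "@context": "https://schema.org",
    "@type": "EducationalOrganization",
    "name": "Example University",        # hypothetical
    "url": "https://www.example.edu",    # hypothetical
    "address": {
        "@type": "PostalAddress",
        "addressLocality": "Boston",
        "addressRegion": "MA",
        "addressCountry": "US",
    },
}

course = {
    "@context": "https://schema.org",
    "@type": "Course",
    "name": "MBA in Data Analytics",     # hypothetical
    "description": "Two-year MBA with a required co-op semester.",
    "provider": {"@type": "EducationalOrganization", "name": "Example University"},
}

for block in (organization, course):
    print(json.dumps(block, indent=2))
```

Validate the printed output in the Rich Results Test before deploying it.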
Step 4: Evaluate your verifiable data density
AI engines cite facts, not slogans. This step assesses the richness of verifiable data on your key pages.
The entity-counting method
Open your 5 most visited pages (homepage, main program page, admissions page, tuition page, student life page) and count for each (a rough automated pre-screen is sketched after this list):
- Sourced figures: employment rate, salary, enrollment numbers, ranking position, with a verifiable source
- Named entities: accreditations (AACSB, regional accreditors), organizations (US Department of Education, IPEDS/NCES), rankings (US News, QS, THE), named partners
- Precise dates: Fall 2026 intake, NACE First Destination Survey 2025, US News Rankings 2026
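For the automated pre-screen, the heuristic below counts percentages, dollar amounts, years and a short list of named entities; the patterns and the entity shortlist are illustrative assumptions, so treat the output as a screening aid, not the score itself:

```python
import re

# Rough pre-screen of verifiable-data density for one page's text.
# Patterns and the entity shortlist are illustrative, not exhaustive.

KNOWN_ENTITIES = ["AACSB", "ABET", "IPEDS", "NCES", "US News", "QS", "THE", "NACE"]

def count_verifiable(text: str) -> int:
    figures = re.findall(r"\d+(?:\.\d+)?\s*%|\$\s?[\d,]+", text)  # percentages, dollar amounts
    years = re.findall(r"\b20\d{2}\b", text)                      # precise dates (years)
    entities = [e for e in KNOWN_ENTITIES if e in text]
    return len(figures) + len(years) + len(entities)

sample = ("93% of the Fall 2025 class was employed within 6 months "
          "(NACE First Destination Survey 2025), median salary $72,400. "
          "AACSB accredited.")
print(count_verifiable(sample))  # counts 93%, $72,400, two years, NACE, AACSB
```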
Scoring
| Verifiable data per page | Level |
|---|---|
| 0-2 | Critical: content too generic for AI |
| 3-5 | Insufficient: some signals but not enough |
| 6-9 | Adequate: exploitable base for AI engines |
| 10+ | Excellent: high density, strong citation probability |
The observed median is 2.3 verifiable data points per page across university websites (Source: Skolbot semantic analysis, 800 pages from 120 institutions, Feb 2026). The top 10 GEO institutions show a median of 8.7 verifiable data points per page.
The gap is considerable, and it goes a long way toward explaining why some institutions are systematically cited while others are systematically ignored.
Step 5: Map your external mentions
AI engines cross-reference sources. The more your institution is mentioned on trusted third-party sites, the more it is considered notable and reliable.
The 12-source checklist
Check whether your institution is listed (with current information) on each of these sites:
| Source | Type | Verified? |
|---|---|---|
| Common App | Application platform | yes/no |
| US Department of Education | Federal agency | yes/no |
| IPEDS/NCES | Federal statistical data | yes/no |
| QS World University Rankings | Ranking | yes/no |
| THE World University Rankings | Ranking | yes/no |
| US News & World Report | Ranking | yes/no |
| Niche.com | Review platform | yes/no |
| CHEA / Regional accreditor | Accreditation | yes/no |
| Google Business Profile | Local | yes/no |
| Wikipedia (dedicated article) | Encyclopedia | yes/no |
| LinkedIn (institution page) | Professional network | yes/no |
| AACSB / EQUIS / ABET | Programmatic accreditation | yes/no |
Scoring
| Sources confirmed | Level |
|---|---|
| 0-3 | Critical: minimal visibility |
| 4-6 | Insufficient: efforts needed |
| 7-9 | Adequate: solid base |
| 10-12 | Excellent: high AI trust profile |
Institutions present on 7+ third-party sources are 3.2x more likely to be cited by an AI engine than those on 3 or fewer (Source: Skolbot GEO correlation analysis, 120 institutions, Feb 2026).
Diagnostic summary: your overall score
Record your scores from the 5 steps to build your AI visibility profile:
| Step | Max score | Your score |
|---|---|---|
| 1. Branded queries | 12 | __ /12 |
| 2. Generic queries | 20 | __ /20 |
| 3. Structured data | 4 schemas | __ /4 |
| 4. Data density | 10+ per page | __ (median) |
| 5. External mentions | 12 sources | __ /12 |
Interpretation
- Profile A (high scores throughout): well positioned. Maintain freshness and monitor quarterly
- Profile B (strong on brand, weak on generic): the AI knows you but does not recommend you. Work on structured content and verifiable data
- Profile C (low throughout except mentions): your reputation exists but your site does not reflect it. Priority: Schema.org
- Profile D (low throughout): full overhaul needed. The plan below is your roadmap (a rough profile classifier is sketched after this list)
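The four profiles reduce to a rough decision rule. A minimal sketch, reusing the thresholds from the steps above (6/12 branded, 5/20 generic, 7/12 mentions); the remaining cutoffs are judgment calls, not calibrated values:

```python
# Map the five step scores to a diagnostic profile (A-D).
# Thresholds mirror the article's rules of thumb; the schema and
# density cutoffs are assumptions, not calibrated values.

def profile(branded: int, generic: int, schemas: int, density: float, mentions: int) -> str:
    brand_ok = branded >= 6                    # Step 1: /12
    generic_ok = generic >= 5                  # Step 2: /20
    site_ok = schemas >= 2 and density >= 6    # Steps 3-4
    mentions_ok = mentions >= 7                # Step 5: /12

    if brand_ok and generic_ok and site_ok and mentions_ok:
        return "A: well positioned; maintain freshness, monitor quarterly"
    if brand_ok and not generic_ok:
        return "B: known but not recommended; work on structured content and verifiable data"
    if mentions_ok and not site_ok:
        return "C: reputation exists but the site lags; prioritize Schema.org"
    return "D: full overhaul; follow the prioritized plan below"

print(profile(branded=7, generic=2, schemas=1, density=2.0, mentions=8))  # -> B (placeholder scores)
```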
Prioritized correction plan
Priority 1 β Week 1: the technical foundation
Implement Schema.org (EducationalOrganization, Course, FAQPage) on your key pages. A developer can do this in 3 to 5 days.
Priority 2 β Week 2: content enrichment
Add verifiable data to your 5 most visited pages: sourced employment rate, median salary, named accreditations. Target: 8+ verifiable data points per page.
Priority 3 β Week 3: structured FAQs
Create marked-up FAQs on your admissions and program pages. Answer the most common questions prospects ask.
Priority 4 β Weeks 4-8: external mentions
Update your listings on Common App, IPEDS/NCES, US News, QS, THE, and your regional accreditor's directory. Complete your Google Business Profile and encourage student reviews on Niche.com and Google.
Priority 5 β Ongoing: freshness
Quarterly update of program pages. Two blog posts per month minimum.
For a deep understanding of GEO strategy in higher education, our complete GEO guide for universities covers the 5 pillars of AI visibility. And for ROI calculation on these actions, see our student chatbot ROI methodology.
FAQ
Does this diagnostic work for all types of US institutions?
Yes. The methodology applies to liberal arts colleges, research universities, community colleges, business schools, engineering schools, and for-profit institutions. The test queries should be adapted to your discipline and geography, but the scoring grid is universal.
How often should I repeat this diagnostic?
A full diagnostic per quarter is sufficient. A lighter check (generic queries only) can be done monthly. AI engines update their models and indices continuously, but significant visibility changes take 4 to 8 weeks to materialize.
My score is low on ChatGPT but adequate on Perplexity. What should I do?
Perplexity reacts quickly because it retrieves live web results (retrieval-augmented generation). ChatGPT leans more heavily on its training corpus. Focus on the levers that impact both: Schema.org, verifiable data, third-party mentions. ChatGPT will catch up at its next corpus update.
Can I run this diagnostic on my competitors?
Yes, and it is recommended. Test the same queries and note which competitors appear. This identifies the attributes AI engines retain about them but not about you.