AI visibility · 9 min read

How AI Engines Recommend a University: The 8 Criteria That Matter

What ChatGPT, Perplexity and Gemini look for when recommending a Canadian university. Authority signals, citations, structured data, reviews and content freshness.


Skolbot Team · March 5, 2026


Table of contents

  1. What we tested, and what the data shows
  2. The 8 AI recommendation criteria, ranked by impact
     • Criterion 1: Frequency in the training corpus
     • Criterion 2: Citations on trusted third-party sources
     • Criterion 3: Schema.org structured data
     • Criterion 4: Density of verifiable data
     • Criterion 5: Content freshness
     • Criterion 6: Snippet-first content structure
     • Criterion 7: E-E-A-T profile (Experience, Expertise, Authoritativeness, Trustworthiness)
     • Criterion 8: Topical coherence of the site
  3. How ChatGPT, Perplexity and Gemini differ
     • ChatGPT: training-corpus weight
     • Perplexity: RAG-first
     • Gemini: the Google Search connection
  4. How to improve your institution's visibility in AI recommendations
     • Immediate-impact actions (1–2 weeks)
     • Medium-term actions (1–3 months)
     • Long-term actions (3–6 months)

What we tested, and what the data shows

To understand how AI engines decide which universities to recommend, we submitted 312 higher-education queries to three engines: ChatGPT (GPT-4o), Perplexity and Gemini. The queries covered six categories (business schools, engineering, computer science, communications, private universities and MBAs) across four prompt types: rankings ("best universities for..."), comparisons ("X vs Y"), criteria-based ("university with co-ops in Toronto") and advisory ("which university should I choose for...").

Across 312 queries, 67 distinct institutions were named at least once. The top 10 captured 58% of all mentions. The remaining 57 shared the other 42%. Hundreds of other institutions simply never appeared (Source: Skolbot GEO monitoring, 312 queries x 3 AI engines, Feb–Mar 2026).

This is not a league table. It is an extreme concentration effect: certain institutions are recommended systematically while others remain invisible. In Canada, the U15 Group of Canadian Research Universities dominates mentions on ChatGPT; on Perplexity, newer institutions with strong digital content occasionally break through. Understanding the criteria behind this selection is the first step to changing it.

The 8 AI recommendation criteria, ranked by impact

Criterion 1: Frequency in the training corpus

The most decisive factor is also the hardest to change quickly. Large language models such as GPT-4 and Gemini were trained on hundreds of billions of words. Institutions that appear frequently in that corpus (press articles, rankings, forums, institutional sites) hold a structural advantage.

The University of Toronto, McGill, UBC, Waterloo: these names are over-represented in English-language training data. This is a cumulative notoriety effect built over decades of media coverage and application volumes flowing through OUAC and provincial application centres.

But this criterion is eroding. With RAG (Retrieval-Augmented Generation), AI engines supplement their training data with real-time web searches. Perplexity relies heavily on RAG. ChatGPT uses it via Browse mode. Gemini activates it by default. Recent, well-structured content can now compensate for a deficit in the historical corpus.

Criterion 2: Citations on trusted third-party sources

AI engines weight source concordance heavily. If your institution is mentioned by Universities Canada, the QS World University Rankings, THE and the Maclean's University Rankings, the engine has four converging sources. Each additional source raises the probability of citation.

Institutions mentioned on 5+ trusted third-party sources are 3.2x more likely to be cited in an AI response than those mentioned on 2 or fewer (Source: Skolbot GEO correlation analysis, 120 institutions x 3 engines, Feb 2026).

High-value sources for Canadian higher education:

  • Institutional: OUAC, provincial application centres, EduCanada, Statistics Canada, Universities Canada
  • Rankings: QS, THE, Maclean's University Rankings, Shanghai Ranking, Financial Times
  • Specialist media: THE, University Affairs, StudyPortals, SchoolFinder.com
  • Accreditations: AACSB, EQUIS, AMBA, CEAB, provincial quality assurance bodies

Criterion 3: Schema.org structured data

Structured data turns your content into entities that AI engines can extract, verify and cite. The EducationalOrganization, Course, FAQPage and AggregateRating schemas have the greatest impact.

This criterion is explored in depth in our complete guide to structured data for universities. In short: +12 points of GEO visibility on average. It is the most cost-effective lever because it depends on a one-off technical implementation.
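To make the idea concrete, here is a minimal sketch of the kind of JSON-LD markup involved. The institution name, URLs and tuition figure are placeholders, not real data, and a production implementation should be validated against the Schema.org definitions for each type.

```python
import json

# Minimal Schema.org sketch for a program page.
# All names, URLs and figures below are placeholders.
schema = {
    "@context": "https://schema.org",
    "@type": "EducationalOrganization",
    "name": "Example University",
    "url": "https://www.example-university.ca",
    # Links to trusted third-party profiles reinforce criterion 2
    "sameAs": ["https://en.wikipedia.org/wiki/Example_University"],
}

course = {
    "@context": "https://schema.org",
    "@type": "Course",
    "name": "Bachelor of Commerce",
    "provider": {"@type": "EducationalOrganization", "name": "Example University"},
    "offers": {
        "@type": "Offer",
        "price": "8500",        # placeholder annual tuition
        "priceCurrency": "CAD",
    },
}

def jsonld_script(data: dict) -> str:
    """Wrap a Schema.org dict in the script tag crawlers parse."""
    return ('<script type="application/ld+json">\n'
            + json.dumps(data, indent=2)
            + "\n</script>")

print(jsonld_script(course))
```

Each program page would embed its own `Course` block alongside the site-wide `EducationalOrganization` block.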

Criterion 4: Density of verifiable data

AI engines preferentially cite passages that contain sourced, quantified facts. "A 94% employment rate within 6 months (Statistics Canada, National Graduate Survey 2025)" will be cited before "an excellent employment rate". The reason is technical: the model can cross-check a sourced figure against other sources; it cannot verify a vague claim.

The verifiable data points most exploited by AI engines in education:

  • Graduate employment rate: with source (Statistics Canada, institutional survey) and year
  • Median graduate salary: in $ CAD, sourced
  • Student numbers: total and per program
  • International partnerships: with named partner universities
  • Tuition fees: exact annual figure in $ CAD ($6,000–$30,000 CAD/year is the typical domestic range)
  • Ranking position: with the ranking and year

Pages containing 5+ sourced data points receive 2.7x more AI citations than purely descriptive pages (Source: Skolbot semantic analysis, 800 pages across 120 institutions, Feb 2026).
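The "sourced, quantified fact" pattern can be audited automatically. The sketch below is a rough heuristic, not Skolbot's actual methodology: it flags sentences that pair a figure with a parenthesised source.

```python
import re

def sourced_data_points(text: str) -> int:
    """Count sentences that pair a figure with an explicit source.

    Heuristic: a 'sourced data point' is a sentence containing both
    a number (94%, $8,500, 2025...) and a parenthesised source note.
    """
    count = 0
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        has_number = re.search(r"\d", sentence)
        has_source = re.search(
            r"\([^)]*(Source|Statistics|Survey|Ranking)[^)]*\)", sentence)
        if has_number and has_source:
            count += 1
    return count

page = ("A 94% employment rate within 6 months (Statistics Canada, "
        "National Graduate Survey 2025). An excellent student experience.")
print(sourced_data_points(page))  # only the sourced, quantified sentence counts
```

Running this over program pages gives a quick first pass at which ones fall short of the 5-data-point threshold.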

Criterion 5: Content freshness

A site whose program pages still mention the "2024 intake" in March 2026 loses credibility with AI engines. Freshness is a reliability signal, especially for year-specific queries ("best universities 2026").

AI engines with RAG capabilities (Perplexity, Gemini, ChatGPT Browse) check the last-modified date. Content updated within the past 3 months is favoured over content older than 12 months.

The ideal update frequency for program pages is quarterly. For blog and news content, fortnightly publication maintains a consistent freshness signal.

Criterion 6: Snippet-first content structure

AI engines extract passages, not entire pages. A paragraph of 40 to 80 words that directly answers a question is more likely to be cited than a discursive 300-word block.

This approach, known in GEO circles as "snippet-first writing", rests on three principles:

  • Each H2 opens with a direct answer before elaborating
  • Bullet-point lists are preferred citation targets for AI engines
  • Short paragraphs (2–3 sentences) with a sourced fact are the optimal extraction unit
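The first principle lends itself to an automated check. The sketch below, a simple heuristic of my own rather than an established tool, verifies that the opening paragraph under each H2 lands in the 40–80 word window.

```python
import re

def snippet_first_report(markdown: str, lo: int = 40, hi: int = 80) -> dict[str, bool]:
    """For each H2, check the opening paragraph falls in the word window."""
    report = {}
    # Split the document at each '## ' heading; drop the preamble
    sections = re.split(r"^## ", markdown, flags=re.M)[1:]
    for section in sections:
        heading, _, body = section.partition("\n")
        first_para = body.strip().split("\n\n")[0]
        words = len(first_para.split())
        report[heading.strip()] = lo <= words <= hi
    return report

doc = "## Tuition\n" + ("word " * 50).strip() + "\n\n## History\nToo short."
print(snippet_first_report(doc))
```

Sections flagged `False` are the ones to rewrite with a direct answer up front.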

Criterion 7: E-E-A-T profile (Experience, Expertise, Authoritativeness, Trustworthiness)

Google formalised the E-E-A-T criteria in its Quality Rater Guidelines. AI engines draw on them indirectly. A page whose author is identified (bio, publications, verifiable expertise) is treated as more reliable than an anonymous page.

For a university, E-E-A-T is built through:

  • Experience: dated, named student and alumni testimonials
  • Expertise: contributions by identified academics, published research
  • Authoritativeness: accreditations, rankings, partnerships with recognised bodies
  • Trustworthiness: HTTPS, privacy policy, verifiable contact details
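The "identified author" signal can be reinforced with `Person` markup on each article. A small sketch; the name, title and profile URL are placeholders.

```python
import json

# Author markup supporting the E-E-A-T signals above.
# Name, title and profile URL are placeholders.
author = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Dr. Jane Doe",
    "jobTitle": "Professor of Finance",
    "worksFor": {"@type": "EducationalOrganization",
                 "name": "Example University"},
    # Link to a verifiable research profile
    "sameAs": ["https://scholar.google.com/citations?user=PLACEHOLDER"],
}
print(json.dumps(author, indent=2))
```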

Criterion 8: Topical coherence of the site

AI engines evaluate the thematic consistency of a site. A university blog that scatters across off-topic subjects dilutes its authority. Topical authority is built through depth, not breadth. A cluster of 15 articles on business school admissions carries more weight than one article each on 15 different subjects.

How ChatGPT, Perplexity and Gemini differ

The three engines do not work the same way, and their recommendations diverge significantly.

ChatGPT: training-corpus weight

ChatGPT relies primarily on its training corpus (up to April 2024 for GPT-4o). Browse mode adds a RAG layer, but the corpus remains dominant. Consequence: ChatGPT favours historically high-profile institutions. 58% of its mentions concentrate on 10 institutions (Source: Skolbot GEO monitoring).

For a mid-tier Canadian university, ChatGPT is the hardest engine to crack. The strategy is to maximise presence in the sources ChatGPT indexes during corpus updates: Wikipedia, international rankings, English-language media.

Perplexity: RAG-first

Perplexity runs a live web search for every query and cites its sources. It is the most responsive engine to content changes and the most sensitive to structured data. Institutions with complete Schema.org markup are 47% more likely to be cited by Perplexity than by ChatGPT (Source: Skolbot GEO monitoring).

For a mid-tier Canadian university, Perplexity is the most accessible AI engine. Rich, well-structured, regularly updated content can get you cited within weeks.

Gemini: the Google Search connection

Gemini natively integrates Google Search data, including rich results and the Knowledge Graph. If your institution has a complete Google Business Profile and Schema.org markup, Gemini already knows about you. It is the engine that leverages Google reviews and local data most heavily.

How to improve your institution's visibility in AI recommendations

Immediate-impact actions (1–2 weeks)

  • Implement Schema.org markup: EducationalOrganization, Course, FAQPage on all program pages. Technical guide in our article on structured data for universities
  • Update dates: replace all outdated year references with the current year
  • Enrich with verifiable data: add employment rate, median salary, student numbers, tuition fees to every program page

Medium-term actions (1–3 months)

  • Audit your third-party mentions: verify that your institution is correctly listed on OUAC, Universities Canada, QS, THE, Statistics Canada, Google Business
  • Restructure content snippet-first: rewrite key paragraphs in a 40–80-word question-answer format
  • Build E-E-A-T content: articles signed by named academics, dated alumni testimonials

Long-term actions (3–6 months)

  • Develop an external mention strategy: specialist press relations, ranking participation, contributions to institutional sites
  • Build a topical cluster: 10–15 blog articles targeting your prospects' queries, interlinked and regularly updated
  • Monitor your GEO visibility: test your prospects' typical queries monthly on ChatGPT, Perplexity and Gemini. Our ChatGPT visibility diagnostic provides a reproducible methodology.
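The monthly monitoring step reduces to a tally: run the same queries, collect the raw answers, and count name mentions. A minimal sketch (the responses and institution list below are invented for illustration; collecting the answers from each engine is left to your own workflow):

```python
import re
from collections import Counter

def mention_counts(responses: list[str], institutions: list[str]) -> Counter:
    """Tally how often each institution is named across AI engine answers."""
    counts = Counter()
    for text in responses:
        for name in institutions:
            counts[name] += len(re.findall(re.escape(name), text, flags=re.I))
    return counts

# Invented example answers, as might be pasted from ChatGPT and Perplexity
responses = [
    "For engineering, the University of Waterloo and McGill stand out.",
    "McGill is often recommended for its research output.",
]
print(mention_counts(responses, ["McGill", "University of Waterloo"]))
```

Tracking these counts month over month shows whether content changes are moving the needle per engine.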

FAQ

Can a university influence ChatGPT's answers?

Yes, but not through direct manipulation. ChatGPT cannot be "optimised" like a search engine. However, the factors that feed its answers β€” presence in trusted sources, structured data, rich and verifiable content β€” are all actionable. The effect is indirect but measurable.

Why does my university appear in Perplexity but not in ChatGPT?

Perplexity performs a live web search for every query, making it sensitive to recent content changes. ChatGPT relies more heavily on its historical training corpus. If your institution has improved its content recently, Perplexity detects it first. ChatGPT will follow at its next corpus update.

Do Google reviews affect AI recommendations?

Yes, particularly for Gemini, which natively integrates Google Business data. ChatGPT and Perplexity access them indirectly via the web. Recent, detailed, positive reviews constitute a trust signal. An insufficient volume of reviews (fewer than 50) or a low rating (below 3.5/5) can harm your visibility.

How long does it take to appear in AI recommendations?

RAG-based AI engines (Perplexity, Gemini) react within 2 to 4 weeks to content changes. ChatGPT is slower because it depends on training-corpus updates (several months). A complete GEO strategy produces its first visible results in 6 to 8 weeks, with a cumulative effect over 6 months.

Do internationally oriented universities have a GEO advantage?

Institutions with English-language content benefit from an advantage in English-language corpora β€” and since the majority of AI training data is in English, this effect is substantial. For queries in French (particularly relevant in Quebec and bilingual provinces), content in French remains prioritised. Optimal strategy: program pages in both English and French, each with its own Schema.org markup.

Related articles

  • Is Your University Visible on ChatGPT? A 5-Step Diagnostic
  • GEO for schools: how to appear in AI answers
  • Structured Data for Universities: Boost Your AI Visibility with Schema.org
