What we tested, and what the data shows
To understand how AI engines decide which universities to recommend, we submitted 312 higher-education queries to three engines: ChatGPT (GPT-4o), Perplexity and Gemini. The queries covered six categories (business schools, engineering, computer science, communications, private universities and MBAs) across four prompt types: rankings ("best universities for..."), comparisons ("X vs Y"), criteria-based ("university with placements in Sydney") and advisory ("which university should I choose for...").
Across 312 queries, 67 distinct institutions were named at least once. The top 10 captured 58% of all mentions; the remaining 57 shared the other 42%. Hundreds of other institutions simply never appeared (Source: Skolbot GEO monitoring, 312 queries x 3 AI engines, Feb-Mar 2026).
This is not a league table. It is an extreme concentration effect: certain institutions are recommended systematically while others remain invisible. In Australia, the Group of Eight (Go8) dominates mentions on ChatGPT; on Perplexity, newer universities with strong digital content occasionally break through. Understanding the criteria behind this selection is the first step to changing it.
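The concentration figure above is simple share-of-mentions arithmetic that any team can reproduce on its own monitoring data. A minimal sketch (the counts below are illustrative, not the study's data):

```python
from collections import Counter

def top_share(mentions, top_n):
    """Share of all mentions captured by the top_n most-cited institutions."""
    counts = Counter(mentions)
    top = sum(c for _, c in counts.most_common(top_n))
    return top / sum(counts.values())

# Illustrative counts: 10 mentions spread across 3 hypothetical institutions
mentions = ["Uni A"] * 6 + ["Uni B"] * 3 + ["Uni C"]
print(top_share(mentions, top_n=2))  # 0.9
```

Run the same calculation on your logged AI answers to see how concentrated your market's recommendations are.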
The 8 AI recommendation criteria, ranked by impact
Criterion 1: Frequency in the training corpus
The most decisive factor is also the hardest to change quickly. Large language models such as GPT-4 and Gemini were trained on hundreds of billions of words. Institutions that appear frequently in that corpus (press articles, league tables, forums, institutional sites) hold a structural advantage.
The University of Melbourne, UNSW Sydney, the Australian National University, the University of Sydney: these names are over-represented in English-language training data. This is a cumulative notoriety effect built over decades of media coverage and application volumes through admissions centres such as UAC, VTAC, QTAC, SATAC and TISC.
But this criterion is eroding. With RAG (Retrieval-Augmented Generation), AI engines supplement their training data with real-time web searches. Perplexity relies heavily on RAG. ChatGPT uses it via Browse mode. Gemini activates it by default. Recent, well-structured content can now compensate for a deficit in the historical corpus.
Criterion 2: Citations on trusted third-party sources
AI engines weight source concordance heavily. If your institution is mentioned by UAC, the QS World University Rankings, THE and the Good Universities Guide, the engine has four converging sources. Each additional source raises the probability of citation.
Institutions mentioned on five or more trusted third-party sources are 3.2x more likely to be cited in an AI response than those mentioned on two or fewer (Source: Skolbot GEO correlation analysis, 120 institutions x 3 AI engines, Feb 2026).
High-value sources for Australian higher education:
- Institutional: UAC, VTAC, QTAC, SATAC, TISC, TEQSA, Department of Education, Study Australia
- Rankings: QS, THE, Financial Times, Good Universities Guide, Shanghai Ranking, ERA (Excellence in Research for Australia)
- Specialist media: THE, Campus Morning Mail, StudyPortals, QILT (Quality Indicators for Learning and Teaching)
- Accreditations: AACSB, EQUIS, AMBA, Engineers Australia, CRICOS registration
Criterion 3: Schema.org structured data
Structured data turns your content into entities that AI engines can extract, verify and cite. The EducationalOrganization, Course, FAQPage and AggregateRating schemas have the greatest impact.
This criterion is explored in depth in our complete guide to structured data for universities. In short: +12 points of GEO visibility on average. It is the most cost-effective lever because it depends on a one-off technical implementation.
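As a concrete illustration, the JSON-LD below covers the EducationalOrganization and Course types named above. The institution name, URL and property choices are hypothetical placeholders; adapt them to your own data before deploying. A minimal sketch that emits the markup ready for a page's `<head>`:

```python
import json

# Illustrative EducationalOrganization entity (all values are placeholders)
org = {
    "@context": "https://schema.org",
    "@type": "EducationalOrganization",
    "name": "Example University",
    "url": "https://www.example.edu.au",
    "sameAs": ["https://en.wikipedia.org/wiki/Example_University"],
}

# Illustrative Course entity linked back to the provider
course = {
    "@context": "https://schema.org",
    "@type": "Course",
    "name": "Master of Business Administration",
    "provider": {"@type": "EducationalOrganization", "name": "Example University"},
}

# Wrap each entity in the script tag AI crawlers and Google expect
for entity in (org, course):
    print('<script type="application/ld+json">')
    print(json.dumps(entity, indent=2))
    print("</script>")
```

One script block per entity keeps each schema independently extractable.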
Criterion 4: Density of verifiable data
AI engines preferentially cite passages that contain sourced, quantified facts. "A 94% employment rate within four months (QILT Graduate Outcomes 2025)" will be cited before "an excellent employment rate". The reason is technical: the model can cross-check a sourced figure against other sources; it cannot verify a vague claim.
The verifiable data points most exploited by AI engines in education:
- Graduate employment rate: with source (QILT, THE) and year
- Median graduate salary: in $AUD, sourced
- Student numbers: total and per program
- International partnerships: with named partner universities
- Tuition fees: exact annual figure (Commonwealth Supported Place or full fee)
- League table position: with the ranking and year
Pages containing five or more sourced data points receive 2.7x more AI citations than purely descriptive pages (Source: Skolbot semantic analysis, 800 pages across 120 institutions, Feb 2026).
Criterion 5: Content freshness
A site whose program pages still mention "2024 intake" in March 2026 loses credibility with AI engines. Freshness is a reliability signal, especially for year-specific queries ("best universities 2026").
AI engines with RAG capabilities (Perplexity, Gemini, ChatGPT Browse) check the last-modified date. Content updated within the past three months is favoured over content older than 12 months.
The ideal update frequency for program pages is quarterly. For blog and news content, fortnightly publication maintains a consistent freshness signal.
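The quarterly threshold is easy to audit internally. A minimal sketch, assuming you can pull last-modified dates from your CMS or from each page's Last-Modified HTTP header (the paths and dates below are illustrative):

```python
from datetime import date

def stale_pages(pages, today, max_age_days=90):
    """Return the pages not updated within max_age_days (the quarterly threshold)."""
    return [url for url, last_mod in pages.items()
            if (today - last_mod).days > max_age_days]

# Hypothetical last-modified dates for two program pages
pages = {
    "/mba": date(2026, 2, 10),         # updated 33 days ago: fresh
    "/engineering": date(2025, 6, 1),  # well over a quarter old: stale
}
print(stale_pages(pages, today=date(2026, 3, 15)))  # ['/engineering']
```

Running this monthly gives you a stale-page list to feed the quarterly update cycle.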
Criterion 6: Snippet-first content structure
AI engines extract passages, not entire pages. A paragraph of 40 to 80 words that directly answers a question is more likely to be cited than a discursive 300-word block.
This approach β known in GEO circles as "snippet-first writing" β rests on three principles:
- Each H2 opens with a direct answer before elaborating
- Bullet-point lists are preferred citation targets for AI engines
- Short paragraphs (two to three sentences) with a sourced fact are the optimal extraction unit
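The 40-80 word window can be enforced mechanically during content reviews. A minimal sketch of such a check (the thresholds match the principle above; the advice strings are our own):

```python
def snippet_check(paragraph, low=40, high=80):
    """Flag whether an answer paragraph fits the 40-80 word extraction window."""
    n = len(paragraph.split())
    if n < low:
        return n, "too short: add detail or a sourced fact"
    if n > high:
        return n, "too long: lead with a direct answer, then elaborate"
    return n, "ok"

# Illustrative check on a 55-word paragraph
print(snippet_check(" ".join(["word"] * 55)))  # (55, 'ok')
```

Wire it into your editorial checklist so every H2's opening paragraph is validated before publication.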
Criterion 7: E-E-A-T profile (Experience, Expertise, Authoritativeness, Trustworthiness)
Google formalised the E-E-A-T criteria in its Quality Rater Guidelines. AI engines draw on them indirectly. A page whose author is identified (with a bio, publications and verifiable expertise) is treated as more reliable than an anonymous page.
For an Australian university, E-E-A-T is built through:
- Experience: dated, named student and alumni testimonials
- Expertise: contributions by identified academics, published research
- Authoritativeness: accreditations, QS/THE rankings, ERA ratings, partnerships with recognised bodies
- Trustworthiness: HTTPS, privacy policy compliant with the Privacy Act 1988, verifiable contact details
Criterion 8: Topical coherence of the site
AI engines evaluate the thematic consistency of a site. A university blog that scatters across off-topic subjects dilutes its authority. Topical authority is built through depth, not breadth. A cluster of 15 articles on business school admissions carries more weight than one article each on 15 different subjects.
How ChatGPT, Perplexity and Gemini differ
The three engines do not work the same way, and their recommendations diverge significantly.
ChatGPT: training-corpus weight
ChatGPT relies primarily on its training corpus (up to April 2024 for GPT-4o). Browse mode adds a RAG layer, but the corpus remains dominant. Consequence: ChatGPT favours historically high-profile institutions. 58% of its mentions concentrate on 10 institutions (Source: Skolbot GEO monitoring).
For a mid-tier Australian university, ChatGPT is the hardest engine to crack. The strategy: maximise presence in sources ChatGPT indexes during corpus updates, namely Wikipedia, international rankings and English-language media.
Perplexity: RAG-first
Perplexity runs a live web search for every query and cites its sources. It is the most responsive engine to content changes and the most sensitive to structured data. Institutions with complete Schema.org markup are 47% more likely to be cited by Perplexity than by ChatGPT (Source: Skolbot GEO monitoring).
For a mid-tier Australian university, Perplexity is the most accessible AI engine. Rich, well-structured, regularly updated content can get you cited within weeks.
Gemini: the Google Search connection
Gemini natively integrates Google Search data, including rich results and the Knowledge Graph. If your institution has a complete Google Business Profile and Schema.org markup, Gemini already knows about you. It is the engine that leverages Google reviews and local data most heavily.
How to improve your institution's visibility in AI recommendations
Immediate-impact actions (one to two weeks)
- Implement Schema.org markup: EducationalOrganization, Course, FAQPage on all program pages. Technical guide in our article on structured data for universities
- Update dates: replace all outdated year references with the current year
- Enrich with verifiable data: add employment rate, median salary, student numbers and tuition fees to every program page
Medium-term actions (one to three months)
- Audit your third-party mentions: verify that your institution is correctly listed on UAC/VTAC/QTAC, QS, THE, QILT, Study Australia and Google Business
- Restructure content snippet-first: rewrite key paragraphs in a 40-80-word question-answer format
- Build E-E-A-T content: articles signed by named academics, dated alumni testimonials
Long-term actions (three to six months)
- Develop an external mention strategy: specialist press relations, ranking participation, contributions to institutional sites
- Build a topical cluster: 10 to 15 blog articles targeting your prospects' queries, interlinked and regularly updated
- Monitor your GEO visibility: test your prospects' typical queries monthly on ChatGPT, Perplexity and Gemini. Our ChatGPT visibility diagnostic provides a reproducible methodology
FAQ
Can an Australian university influence ChatGPT's answers?
Yes, but not through direct manipulation. ChatGPT cannot be "optimised" like a search engine. However, the factors that feed its answers (presence in trusted sources, structured data, rich and verifiable content) are all actionable. The effect is indirect but measurable. Australian institutions registered with TEQSA and listed across UAC, QS and THE have a strong foundation to build on.
Why does my university appear in Perplexity but not in ChatGPT?
Perplexity performs a live web search for every query, making it sensitive to recent content changes. ChatGPT relies more heavily on its historical training corpus. If your institution has improved its content recently, Perplexity detects it first. ChatGPT will follow at its next corpus update.
Do Google reviews affect AI recommendations?
Yes, particularly for Gemini, which natively integrates Google Business data. ChatGPT and Perplexity access them indirectly via the web. Recent, detailed, positive reviews constitute a trust signal. An insufficient volume of reviews (fewer than 50) or a low rating (below 3.5/5) can harm your visibility.
How long does it take to appear in AI recommendations?
RAG-based AI engines (Perplexity, Gemini) react within two to four weeks to content changes. ChatGPT is slower because it depends on training-corpus updates (several months). A complete GEO strategy produces its first visible results in six to eight weeks, with a cumulative effect over six months.
Do internationally oriented Australian universities have a GEO advantage?
Australian institutions benefit from a natural advantage: English is the dominant language of AI training corpora, and Australia is a top-five destination for international students globally. With programs marketed through Study Australia and Austrade, institutions already have strong English-language presence. For queries in other languages, content in that language remains prioritised. Optimal strategy: program pages in both English and the target language, each with its own Schema.org markup.