
90-Day Plan to Get Cited by ChatGPT and Perplexity

A phased 90-day action plan for UK schools to get cited by ChatGPT and Perplexity — Schema.org, citable content, external mentions and measurement.


Skolbot Team · May 13, 2026


Table of contents

  1. Why ChatGPT and Perplexity aren't citing your school yet
  2. The 4 pillars of AI visibility
  3. Phase 1 – Days 1–30: Technical foundations
     • Allow AI crawlers
     • Implement Schema.org EducationalOrganization
     • Audit and fix crawlability issues
     • Deliverables by Day 30
  4. Phase 2 – Days 31–60: Creating citable content
     • Write answer capsules for every H2
     • Build data tables for every programme page
     • Create FAQ pages per programme with FAQPage markup
     • Publish freshness signals
     • Deliverables by Day 60
  5. Phase 3 – Days 61–90: Amplification and external mentions
     • Secure and update aggregator profiles
     • Generate credible sector media mentions
     • Build cross-links between your content and authority sources
     • Deliverables by Day 90
  6. Measuring results at Day 90

Why ChatGPT and Perplexity aren't citing your school yet

The core reason is structural, not competitive: AI engines cannot cite what they cannot identify. In the UK, only 29% of ChatGPT responses about higher education name a specific institution — Perplexity reaches 38%, against a European average of just 19% (Source: Skolbot GEO Monitoring, 500 queries × 6 countries × 3 AI engines, Feb 2026). The remaining 60–70% of answers are generic summaries with no institutional mention, even when the query is highly specific.

The gap is not caused by a lack of content. UK universities publish enormous volumes of material. The problem is that most of it is optimised for human readers and traditional search engines, not for the extraction mechanisms used by large language models. ChatGPT and Perplexity look for named entities, structured data, verifiable figures and factual density — signals that most admissions web pages do not yet provide.

The good news is that this is a solvable problem. Research from Princeton's NLP Group found that the top Generative Engine Optimisation (GEO) methods improve AI visibility by 30–40% over unoptimised content. The 90-day plan below translates those findings into a sequenced programme any admissions director can run without a technical agency.

For the underlying framework, see our complete GEO guide for schools.

The 4 pillars of AI visibility

Getting cited by ChatGPT and Perplexity requires progress on four distinct fronts. No single pillar is sufficient on its own, but the order of implementation matters: technical foundations must come before content, and content before amplification.

Pillar | What it achieves | Primary tactics | Timeframe
Technical foundations | Identifies your institution as a verifiable entity | Schema.org EducationalOrganization, crawler access, crawlable pages | Days 1–30
Citable content | Gives AI engines extractable, factual answers | Answer capsules, data tables, FAQ markup | Days 31–60
External mentions | Builds corroborating authority signals | UCAS, QS, Guardian, OfS profiles; media coverage | Days 61–90
Measurement | Tracks citation rate and guides iteration | 20-query test protocol, monthly tracking | Ongoing from Day 1

This structure reflects how AI engines generate answers. They first establish whether your institution is a known entity (pillar 1), then assess whether your content is citable (pillar 2), then cross-reference that content against external corroboration (pillar 3). Missing any pillar prevents the full citation chain from completing.

Phase 1 – Days 1–30: Technical foundations

Phase 1 removes the blockers that prevent AI engines from even recognising your institution. Without these foundations in place, content improvements in Phase 2 will have limited effect.

Allow AI crawlers

ChatGPT and Perplexity use dedicated crawlers — OAI-SearchBot and PerplexityBot respectively — that are distinct from Googlebot. Many UK universities inadvertently block them via legacy robots.txt rules. Check your robots.txt file and confirm that neither bot is disallowed. If you use a CDN or WAF (Web Application Firewall), verify that rate-limiting rules are not inadvertently blocking these user agents.
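As a quick check, Python's standard-library `urllib.robotparser` can simulate how each crawler reads your rules. This is a minimal sketch; the legacy rule set shown is a hypothetical example of the pattern that catches AI crawlers in a wildcard block:

```python
from urllib import robotparser

AI_CRAWLERS = ["OAI-SearchBot", "PerplexityBot"]

def blocked_crawlers(robots_txt: str, path: str = "/") -> list[str]:
    """Return the AI crawler user agents that robots_txt disallows for path."""
    rp = robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return [ua for ua in AI_CRAWLERS if not rp.can_fetch(ua, path)]

# Hypothetical legacy rules: Googlebot is allowed, everything else is blocked.
legacy_rules = """\
User-agent: Googlebot
Allow: /

User-agent: *
Disallow: /
"""

print(blocked_crawlers(legacy_rules))
# Both AI crawlers fall through to the wildcard Disallow.
```

If the function returns a non-empty list for your live robots.txt, fix the rules before anything else in this plan.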

This is the fastest single action in the plan. A blocked crawler means zero citations, regardless of content quality.

Implement Schema.org EducationalOrganization

Schema.org's EducationalOrganization markup transforms your institution from an anonymous block of text into a named entity that AI engines can identify and cross-reference. Add JSON-LD markup to your homepage and About page covering at minimum: name, url, logo, address, telephone, foundingDate, accreditation, numberOfStudents and areaServed.
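The fields above can be sketched in JSON-LD, placed inside a `<script type="application/ld+json">` tag in the page head. All values here are illustrative placeholders, not real institutional data:

```json
{
  "@context": "https://schema.org",
  "@type": "EducationalOrganization",
  "name": "Example University",
  "url": "https://www.example.ac.uk",
  "logo": "https://www.example.ac.uk/assets/logo.png",
  "telephone": "+44 20 7946 0958",
  "foundingDate": "1904",
  "numberOfStudents": 12000,
  "areaServed": "GB",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "1 Example Road",
    "addressLocality": "London",
    "postalCode": "EC1A 1AA",
    "addressCountry": "GB"
  }
}
```

Validate the deployed markup with Google's Rich Results Test before moving on.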

For programme pages, add Course or EducationalOccupationalProgram markup with educationalCredentialAwarded, provider, tuitionInfo, occupationalCategory and applicationDeadline. Google's Structured Data documentation provides the canonical implementation reference. Institutions that implement Schema.org structured data gain an average of +12 visibility points in AI engine responses (Source: Skolbot GEO Monitoring, 500 queries × 6 countries × 3 AI engines, Feb 2026).

Audit and fix crawlability issues

Run a crawl of your key pages using a tool such as Screaming Frog or Sitebulb. Identify pages returning 4xx or 5xx errors, pages blocked by noindex tags, and content hidden behind JavaScript rendering that crawlers cannot access. Programme pages, FAQ pages and key data pages must be crawlable in plain HTML. PDFs behind lead-capture forms are invisible to AI engines and should be replaced with open HTML pages.

Use our ChatGPT visibility diagnostic to establish your baseline citation score before any changes.

Deliverables by Day 30

  • robots.txt updated to allow OAI-SearchBot and PerplexityBot
  • Schema.org EducationalOrganization live on homepage and About page
  • Schema.org Course live on your three highest-traffic programme pages
  • Crawl report with list of blocked or broken pages, prioritised for fixing

Phase 2 – Days 31–60: Creating citable content

Once AI engines can identify your institution, they need content they can extract and cite. Phase 2 is about reformatting and enriching existing content — and creating a small number of purpose-built pages.

Write answer capsules for every H2

An answer capsule is a 40–60 word direct response placed in the first one or two sentences below each H2 heading. It mirrors how an AI engine generates a response: it finds the most concise accurate answer to the implicit question, then adds context. If your H2 reads "Graduate outcomes", the first sentence must state the outcome rate, source and year — not explain why graduate employment matters.

Before: "Our graduates go on to impressive careers at leading firms across the UK and internationally."

After: "93% of the 2024 cohort secured graduate-level employment within six months of completing their degree (HESA Graduate Outcomes 2025, 312 respondents). Median salary at 12 months was £34,500."

The second version contains three verifiable data points. AI engines extract facts, not narratives.
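To audit existing pages at scale, a small heuristic script can flag capsules that miss the length or data-density targets. This is a sketch: the thresholds follow the 40–60 word guidance above, and `capsule_check` is a hypothetical helper name:

```python
import re

def capsule_check(text: str) -> dict:
    """Heuristic audit of an answer capsule: length and factual density.

    Uses digits as a rough proxy for verifiable data points.
    """
    words = len(text.split())
    figures = len(re.findall(r"\d", text))
    return {
        "word_count": words,
        "length_ok": 40 <= words <= 60,
        "has_figures": figures > 0,
    }

before = ("Our graduates go on to impressive careers at leading firms "
          "across the UK and internationally.")
after = ("93% of the 2024 cohort secured graduate-level employment within six "
         "months of completing their degree (HESA Graduate Outcomes 2025, "
         "312 respondents). Median salary at 12 months was £34,500.")

print(capsule_check(before)["has_figures"])  # False
print(capsule_check(after)["has_figures"])   # True
```

Run this over the first paragraph under each H2 to build a prioritised rewrite list.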

Build data tables for every programme page

Tables are the most extractable format for large language models. A structured table with descriptive column headers and numerical data is far more likely to be cited than a paragraph containing the same information.

Each programme page should contain at least one table covering: duration, annual tuition (home and international separately), entry requirements (UCAS tariff points or equivalent), graduate employment rate, median starting salary, cohort size and key accreditations. Include the source and year for each figure.

Do not lock this data inside downloadable PDFs. AI engines cannot read PDFs behind registration walls, and the Office for Students (OfS) requires key information to be available without barriers to access.

Create FAQ pages per programme with FAQPage markup

Each programme needs a dedicated FAQ page covering the questions prospects actually ask AI engines. Use real search queries as your source: "How much does [programme] cost at [institution]?", "What are the entry requirements for [programme]?", "What careers do [programme] graduates pursue?". Each question and answer pair should be marked up with Schema.org FAQPage JSON-LD.

Aim for 8–12 questions per programme FAQ. Answers should be 50–120 words each — long enough to be informative, short enough to be directly extractable. Avoid marketing language; write as if answering a UCAS adviser's question.
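A single question-and-answer pair in FAQPage JSON-LD looks like this; the programme name and figures are illustrative, not real data:

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "What are the entry requirements for BSc Computer Science?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Applicants need 128 UCAS tariff points, typically ABB at A-level including Mathematics. Contextual offers of BBB are available to eligible applicants."
      }
    }
  ]
}
```

Add one object to the `mainEntity` array per question, keeping the visible page text identical to the markup.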

Publish freshness signals

AI engines, particularly Perplexity, weight recency heavily. Publish at least two data-rich pages during Phase 2 with explicit publication dates and current-year figures. Good candidates include: "Graduate Outcomes Class of 2025", "Tuition Fees and Scholarships 2026–27", or "TEF Rating Explained: What It Means for Students". Each page should carry a visible datePublished and dateModified in its Schema.org markup.
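The date properties can be carried in the page's JSON-LD. A minimal sketch, with placeholder dates:

```json
{
  "@context": "https://schema.org",
  "@type": "WebPage",
  "name": "Graduate Outcomes Class of 2025",
  "datePublished": "2026-01-20",
  "dateModified": "2026-04-02"
}
```

Update `dateModified` whenever the figures on the page change, and keep the visible date on the page in sync with the markup.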

For the full content strategy, see our guide on content cited by ChatGPT for schools.

Deliverables by Day 60

  • Answer capsules added to all key programme, outcomes and admissions pages
  • Data tables live on your five highest-priority programme pages
  • FAQPage markup live on at least three programme FAQ pages
  • Two freshness-signal pages published with current-year data

Phase 3 – Days 61–90: Amplification and external mentions

AI engines cross-reference sources. If your institution appears only on its own website, without corroborating mentions on authoritative external platforms, the AI cannot validate your claims. Phase 3 builds the external citation network that gives Phase 1 and Phase 2 work its full effect.

Secure and update aggregator profiles

Verify and update your profiles on the platforms AI engines treat as authoritative corroboration signals for UK higher education:

  • UCAS: Ensure your course listings are complete, accurate and include employability data where the field permits.
  • QS World University Rankings: Claim your institutional profile and ensure all data fields are populated.
  • Guardian University Guide: If you feature, verify that the data the Guardian holds matches your published figures.
  • OfS Register: Confirm your entry is current and includes your TEF rating. AI engines regularly cite the OfS Register as a factual source on UK institutions.
  • QAA: Ensure your Quality Enhancement reports and Enhancement-Led Institutional Review outcomes are accurately described on your website and linked to the QAA website.

Perplexity and ChatGPT regularly pull from these aggregators to corroborate institutional claims. A mismatch between your website and your UCAS listing is a trust signal failure that reduces citation probability.

Generate credible sector media mentions

Pitch data-driven commentary to sector publications: Times Higher Education, The Guardian Higher Education Network, Wonkhe and Tes Higher. The strongest pitches are built around verifiable institutional data: employability outcomes, TEF performance, widening participation statistics, or sector-first initiatives. A single article in Times Higher Education citing your institution by name can materially improve AI engine citation rates within 4–6 weeks of publication.

Avoid opinion pieces without data. AI engines cannot extract a citable claim from an argument; they can extract it from a statistic.

Build cross-links between your content and authority sources

Where your content references your TEF rating, link directly to the OfS register entry for your institution. Where you cite HESA Graduate Outcomes data, link to the HESA data page. Where you reference QAA enhancement reviews, link to your institution's page on the QAA website. These outbound links to tier-1 authorities strengthen the trust graph that AI engines use to assess source reliability.

For a deeper look at the Perplexity-specific amplification strategy, see our Perplexity audit and optimisation guide.

Deliverables by Day 90

  • UCAS, QS, Guardian and OfS profiles verified and updated
  • At least one sector media piece published or in editorial pipeline
  • Outbound authority links added to key pages
  • Internal cross-links connecting FAQ pages, data pages and programme pages

Measuring results at Day 90

At Day 90, run the same 20-query test protocol you used to establish your baseline at Day 1. Submit each query to ChatGPT, Perplexity and Google AI Overviews and record: whether your institution is named, whether the information cited is accurate, and whether a direct link to your site is included (Perplexity only).
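The protocol lends itself to a simple scoring log. The sketch below assumes one record per query-engine pair; `QueryResult` and the example rows are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class QueryResult:
    query: str
    engine: str     # "chatgpt", "perplexity" or "ai_overviews"
    branded: bool   # query names the institution
    cited: bool     # institution named in the answer
    accurate: bool  # cited data matches published figures

def citation_rate(results: list[QueryResult], branded: bool) -> float:
    """Percentage of branded (or generic) queries where the institution is cited."""
    subset = [r for r in results if r.branded == branded]
    if not subset:
        return 0.0
    return 100 * sum(r.cited for r in subset) / len(subset)

# Hypothetical Day 90 log for two queries on one engine.
log = [
    QueryResult("Example University tuition fees", "perplexity", True, True, True),
    QueryResult("best law schools in London", "perplexity", False, False, True),
]

print(citation_rate(log, branded=True))  # 100.0
```

Keeping the log as structured records makes the Day 1 vs Day 90 comparison a one-line diff per metric.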

Use this scoring grid to compare your Day 1 and Day 90 results:

Metric | Day 1 baseline | Day 90 target | Measurement method
Citation rate — branded queries | Record % | >80% | Manual query test × 3 engines
Citation rate — generic queries | Record % | >25% | Manual query test × 3 engines
Accuracy of cited data | Record % | 100% | Compare AI output vs published data
Pages cited by Perplexity | Record count | >3 distinct pages | Perplexity source panel
Schema.org entities indexed | 0 | 5+ (homepage, 3 programmes, FAQ) | Google Rich Results Test

If your branded query citation rate is below 60% at Day 90, return to Phase 1: the entity recognition layer is incomplete. If generic query citation remains below 10%, the content in Phase 2 needs additional data density — more tables, more sourced figures, more answer capsules. For a full AI Overviews perspective on your performance, see our guide to Google AI Overviews in higher education.

Expect meaningful movement within 60–75 days. Perplexity is the most reactive (1–3 weeks for new content), ChatGPT the least (4–8 weeks). Set realistic expectations with your leadership: this is a 90-day programme with compounding returns, not an overnight fix.


FAQ

How long does it take for Schema.org markup to affect AI citations?

For Perplexity, which queries the live web, structured data changes can produce visible citation improvements within 2–4 weeks. For ChatGPT, which uses a more static training corpus updated in waves, the effect typically takes 4–8 weeks to materialise. Implementing Schema.org in Phase 1 is therefore time-sensitive: the earlier you deploy it, the earlier the clock starts.

Do I need a specialist agency to run this plan?

Not necessarily. The content and aggregator work in Phases 2 and 3 can be managed by an in-house admissions marketing team. The Schema.org implementation in Phase 1 requires either a developer or a CMS plugin that supports JSON-LD structured data — many modern university CMS platforms (WordPress, Drupal, Sitecore) have this capability built in or available via plugin. The robots.txt change takes under 10 minutes.

Is this plan relevant for smaller, specialist institutions as well as Russell Group universities?

Yes — and smaller institutions often see larger relative gains. A Russell Group university is already well-represented in AI engine training data through rankings, press coverage and institutional authority. A specialist provider (a design school, a law school, a conservatoire) competes on niche queries where structured, factual, up-to-date content makes the decisive difference. Perplexity in particular surfaces smaller specialist institutions when their content answers a specific query better than a generic university page.

What if ChatGPT is citing inaccurate information about my school?

This is a separate issue from visibility and must be addressed in parallel. Publish a clearly structured, data-rich "About" page or "Fast Facts" page that contains accurate versions of all the data points where errors occur (founding year, student numbers, accreditations, TEF rating). Link this page from your homepage with EducationalOrganization Schema.org markup. The accurate on-page data, reinforced by Schema.org, gives the AI a higher-authority source to draw from. Flag persistent inaccuracies to OpenAI via their feedback mechanism and to Perplexity via their correction request process.

How does the OfS TEF rating affect AI visibility?

TEF ratings are cited by AI engines as an official UK quality signal — particularly Perplexity, which regularly pulls from the OfS register. A Gold or Silver TEF rating, clearly stated in your Schema.org markup and on your key pages with a direct link to your OfS register entry, acts as a corroborating authority signal that makes citations more likely. Institutions with TEF Gold that do not surface this prominently in their structured data are leaving a significant visibility signal unused.

