The EU AI Act and Higher Education: What Your Institution Needs to Know

Practical guide to the EU AI Act for higher education institutions. Risk classification, obligations, compliance timeline and what to demand from AI vendors.

Priya Sharma

EdTech & AI Compliance Consultant for Higher Education · 7 March 2026


Table of contents

  1. The AI Act is coming into force, and universities are in scope
  2. Risk classification: where does your institution stand?
     • Unacceptable risk (prohibited)
     • High risk (strict obligations)
     • Limited risk (transparency obligations)
     • Minimal risk (no specific obligations)
  3. The compliance timeline
  4. Concrete obligations per use case
     • Admissions chatbot (limited risk)
     • Automated candidate screening (high risk)
     • AI plagiarism detection (high risk if it affects grading)
  5. How the AI Act interacts with GDPR
  6. 10-point compliance checklist
  7. Sanctions for non-compliance
  8. What universities should demand from AI vendors
  9. FAQ
     • Is my admissions chatbot classified as high risk?
     • Does the AI Act apply to non-EU universities that recruit in Europe?
     • What is the relationship between the AI Act and GDPR for candidate data?
     • How much time and budget should we plan for compliance?

The AI Act is coming into force, and universities are in scope

The European Artificial Intelligence Act (AI Act, Regulation (EU) 2024/1689) is the world's first legal framework to regulate AI systems by risk level. Enforcement began in February 2025 for prohibited practices, and the obligations for high-risk systems, some of which are used in education, come into force in August 2026 (Source: Official Journal of the EU, Regulation 2024/1689, Art. 113).

For higher education institutions, this is not an abstract topic. The moment a university deploys an admissions chatbot, a candidate-scoring tool, an AI plagiarism detector or an algorithm that recommends study programmes, it is deploying an AI system within the meaning of the regulation. The question is not whether your institution is in scope (it is), but which risk category your tools fall into.

Risk classification: where does your institution stand?

The AI Act classifies AI systems into four risk levels. Each level carries different obligations.

Unacceptable risk (prohibited)

Prohibited systems include general social scoring, subliminal manipulation, and exploitation of vulnerabilities. In the education context, a system that scored students based on their overall social behaviour (event attendance, social media participation) to determine admissions would be prohibited (Source: AI Act, Art. 5, paragraph 1).

This scenario sounds extreme, but some "holistic" candidate-scoring practices come close. If your admissions decision tool incorporates behavioural data unrelated to academic aptitude, have it audited.

High risk (strict obligations)

This is the most important category for higher education. Annex III of the regulation explicitly classifies as high risk AI systems used to "determine access to or admission to educational and vocational training institutions" (Source: AI Act, Annex III, point 3a).

Specifically, this covers:

  • Automated candidate-screening tools: any system that filters, ranks or scores applications based on algorithmic criteria
  • AI plagiarism detectors that influence grading or academic assessment
  • Placement or orientation algorithms that determine access to specific programmes
  • Automated grading systems that produce or influence academic evaluations

The obligations for these systems are substantial: a risk management system, documented training data, technical transparency, human oversight, logging, and registration in the EU database.

Limited risk (transparency obligations)

Admissions and information chatbots fall into this category. The primary obligation is simple but non-negotiable: inform the user that they are interacting with an AI system (Source: AI Act, Art. 50, paragraph 1).

In practice, your chatbot must clearly state that it is an AI assistant, not a human. A message such as "I am an AI assistant for [Institution Name]. A human adviser is available on request" fulfils this obligation.

Also in this category:

  • AI-generated content systems (automated emails, programme descriptions)
  • Emotion recognition systems (tone analysis in video interviews, a growing area), though note that inferring emotions in workplace and education settings is separately prohibited under Art. 5, paragraph 1, except for medical or safety reasons, so seek legal advice before deploying such tools
  • Automatic translation tools for pedagogical content

Minimal risk (no specific obligations)

AI tools with no impact on fundamental rights: spell checkers, spam filters, timetabling optimisers. No regulatory obligations, though transparency best practices remain recommended.

The compliance timeline

Deadlines are staggered. Here are the dates that directly concern universities.

2 February 2025: prohibitions take effect (unacceptable-risk practices). If your institution uses a social scoring or manipulation system, it must already be deactivated.

2 August 2025: obligations for general-purpose AI models (GPAI). This concerns model providers such as OpenAI, Anthropic and Mistral, not universities directly, but your AI tool vendors must comply. Demand a compliance declaration from your suppliers.

2 August 2026: obligations for high-risk systems (Annex III). This is the critical date for universities. If you use an automated candidate-screening tool or automated grading system, it must be compliant by this date.

2 August 2027: extension to high-risk systems regulated by sector-specific legislation.

About five months remain before the high-risk obligations take effect. If your institution has not yet audited its AI tools, the timeline is tight but not insurmountable.

Concrete obligations per use case

Admissions chatbot (limited risk)

Obligations are proportionate and realistic.

Obligation 1: transparency. The chatbot must identify itself as AI. A permanent banner or clear welcome message suffices. Users must not believe they are conversing with a human.

Obligation 2: data processing information. In line with the EU GDPR and UK GDPR (which apply in parallel where an institution recruits in both jurisdictions), data processing by the chatbot must be documented in the privacy policy. Our GDPR guide for student data details these obligations.

Obligation 3: human contact option. The prospect must be able to request a human at any point. A "Speak to an adviser" button must be visible at all times.
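
Taken together, these three obligations amount to a handful of interface settings. As a minimal sketch, assuming a hypothetical widget configuration object (the property names are illustrative, not any vendor's actual API):

```typescript
// Hypothetical chatbot widget configuration covering the three
// limited-risk obligations. All names here are illustrative.
interface ChatbotComplianceConfig {
  aiDisclosureMessage: string; // Obligation 1: identify as AI (Art. 50)
  privacyPolicyUrl: string;    // Obligation 2: documented data processing
  humanHandoffLabel: string;   // Obligation 3: always-visible human option
}

const config: ChatbotComplianceConfig = {
  aiDisclosureMessage:
    "I am an AI assistant for [Institution Name]. " +
    "A human adviser is available on request.",
  privacyPolicyUrl: "https://example.edu/privacy-policy",
  humanHandoffLabel: "Speak to an adviser",
};
```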

Estimated compliance cost: near zero if your chatbot is already transparent. Allow 2 to 5 days to audit, document and adjust the interface if needed.

Automated candidate screening (high risk)

Obligations are significantly heavier.

A high-risk system must meet six obligations:

  1. Documented risk management (bias, discrimination, classification errors)
  2. Training data quality (representativeness, absence of historical bias)
  3. Complete technical documentation
  4. Human oversight: no admissions decision may be fully automated
  5. Input/output logging (retained for a minimum of six months)
  6. Registration in the EU database
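
To make obligation 5 concrete, here is a minimal sketch of what a screening log entry could capture; the types, fields and function below are illustrative assumptions, not a prescribed schema:

```typescript
// Hypothetical audit-log entry for a candidate-screening system.
// The AI Act requires automatically generated logs to be retained
// for at least six months; the exact fields are up to the deployer.
interface ScreeningLogEntry {
  timestamp: string;      // ISO 8601: when the system produced the output
  applicationId: string;  // pseudonymised reference, not raw personal data
  inputsHash: string;     // fingerprint of the features the system used
  score: number;          // system output before human review
  humanReviewer: string;  // who confirmed or overrode the result
  overridden: boolean;    // whether human oversight changed the outcome
}

// Append-only storage: logs should stay tamper-evident and queryable.
function logScreening(entry: ScreeningLogEntry, sink: ScreeningLogEntry[]): void {
  sink.push(Object.freeze({ ...entry }));
}
```

The exact fields matter less than the properties an auditor will look for: timestamps, a traceable link between inputs and outputs, and a record of whether a human confirmed or overrode each result.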

Estimated compliance cost: EUR 15,000 to 50,000 for a full audit, technical documentation and human oversight processes. This cost is primarily borne by the tool vendor, but the institution (as "deployer" under the regulation) also has obligations.

AI plagiarism detection (high risk if it affects grading)

If the tool directly influences grading or academic decisions, it falls into the high-risk category. AI detectors currently show a false-positive rate of 5 to 15% (Source: Stanford HAI, 2025). Human oversight is not optional; it is a legal requirement.
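
The base-rate arithmetic behind this requirement is easy to underestimate. A back-of-the-envelope sketch, using illustrative assumptions (a 5% false-positive rate, the optimistic end of the range above; 90% detection sensitivity; and 5% of submissions actually involving undisclosed AI use):

```typescript
// Back-of-the-envelope: what fraction of "AI detected" flags are correct?
// All three inputs are illustrative assumptions, not measured values.
const falsePositiveRate = 0.05; // honest work wrongly flagged
const sensitivity = 0.9;        // AI-assisted work correctly flagged
const prevalence = 0.05;        // share of submissions using undisclosed AI

const truePositives = prevalence * sensitivity;              // 0.045
const falsePositives = (1 - prevalence) * falsePositiveRate; // 0.0475
const precision = truePositives / (truePositives + falsePositives);

console.log(`Share of flags that are correct: ${(precision * 100).toFixed(1)}%`);
// Prints roughly 48.6%: about half of all flagged submissions would be
// honest work, which is why a human must review every flag before it
// affects a grade.
```

Under these assumptions roughly half of all flags are wrong, and with a lower prevalence or a higher false-positive rate, false flags dominate. That is the quantitative case for mandatory human review.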

How the AI Act interacts with GDPR

The AI Act does not replace the GDPR; it adds to it. All processing of personal data by an AI system remains subject to the GDPR (legal basis, minimisation, access rights). Article 22 of the GDPR already restricts solely automated decisions with legal effects; the AI Act reinforces this protection with more detailed transparency and human-oversight requirements. The two regulations are complementary: GDPR compliance does not exempt an institution from AI Act compliance.

For a deeper dive into GDPR compliance specific to student data, see our dedicated guide.

10-point compliance checklist

  1. Inventory: list all AI tools (chatbot, CRM, scoring, plagiarism, recommendation); a structured register example follows this list
  2. Classification: minimal, limited, high or unacceptable risk for each tool
  3. Vendor audit: demand compliance declaration and timeline from each supplier
  4. Chatbot transparency: AI identification + human contact option
  5. Human oversight: no admissions decision fully automated
  6. Technical documentation: complete dossier for high-risk systems
  7. Bias analysis: tests on gender, geographic origin, school type
  8. Privacy policy: integrate AI processing disclosures
  9. Training: admissions and academic teams on their obligations
  10. Annual review: AI Act audit aligned with GDPR review cycle
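
For steps 1 and 2 of the checklist, a simple structured register is enough to get started. A minimal sketch with illustrative field names, to be adapted to whatever audit tooling you already use:

```typescript
// Hypothetical inventory record for checklist steps 1 (inventory)
// and 2 (classification). Every name here is illustrative.
type RiskLevel = "minimal" | "limited" | "high" | "unacceptable";

interface AiToolRecord {
  name: string;                   // e.g. "Admissions chatbot"
  vendor: string;
  purpose: string;                // what the tool decides or influences
  riskLevel: RiskLevel;           // per the classification above
  affectsAdmissions: boolean;     // Annex III trigger: access to education
  complianceDeclaration?: string; // vendor document reference, once received
  lastReviewed: string;           // ISO date of the most recent audit
}

const inventory: AiToolRecord[] = [
  {
    name: "Admissions chatbot",
    vendor: "ExampleVendor",
    purpose: "Answers prospect questions; makes no admissions decisions",
    riskLevel: "limited",
    affectsAdmissions: false,
    lastReviewed: "2026-03-01",
  },
];
```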

Sanctions for non-compliance

The AI Act provides graduated fines: up to EUR 35 million or 7% of worldwide annual turnover (whichever is higher) for prohibited practices, up to EUR 15 million or 3% for non-compliance with high-risk obligations, and up to EUR 7.5 million or 1% for supplying inaccurate information. The reputational risk is at least as serious: a university sanctioned for AI non-compliance undermines its own credibility as an institution training the next generation of digital professionals.

What universities should demand from AI vendors

As "deployers" under the regulation, institutions share responsibility. Demand from each vendor: a dated AI Act compliance declaration, risk classification with justification, accessible technical documentation, contractual commitment to human oversight and transparency, a documented bias audit, and a regulatory update plan.

For insight into how AI influences university visibility beyond compliance, our article on AI recommendation criteria for universities explores the topic in depth.

FAQ

Is my admissions chatbot classified as high risk?

No, unless it makes autonomous admissions decisions. A chatbot that informs, answers questions and qualifies prospects is classified as "limited risk". It must identify itself as AI and offer human access, but it is not subject to the heavy high-risk obligations. However, if the chatbot automatically decides to accept or reject an application, it crosses into the high-risk category.

Does the AI Act apply to non-EU universities that recruit in Europe?

Yes. The regulation applies to any AI system whose outputs are used within the EU, regardless of where the provider or deployer is established. A UK or Swiss university using a scoring tool to select candidates residing in the EU is subject to the AI Act.

What is the relationship between the AI Act and GDPR for candidate data?

Both regulations apply simultaneously. The GDPR governs the collection, storage and processing of personal data. The AI Act adds obligations on how that data is used by AI systems (transparency, human oversight, bias auditing). GDPR compliance does not exempt an institution from AI Act compliance, and vice versa.

How much time and budget should we plan for compliance?

For a university using a chatbot (limited risk) and a CRM with basic scoring: 2 to 4 weeks and EUR 3,000 to 8,000 in auditing and adjustments. For a university using an automated screening system (high risk): 2 to 4 months and EUR 15,000 to 50,000, with a significant portion typically borne by the tool vendor.

Related articles

Compliance
Protecting prospect student data: an operational GDPR guide for admissions teams

Compliance
GDPR and student data: complete guide for schools

Recruitment
Automate Student Recruitment Without Losing the Human Touch
