A structured RFP eliminates 80% of selection mistakes
Most universities choose their chatbot after a 30-minute demo and a pricing negotiation. Six months later, the tool gives irrelevant answers, nobody checks the analytics, and the admissions team goes back to the contact form.
The problem is not the chatbot. It is the absence of a proper specification document. Without formalised criteria, every stakeholder evaluates the solution against their own priorities — IT looks at integration, the admissions director wants leads, the VP Finance compares prices. The result is a decision by default, not by method.
This guide provides the 12 criteria to include in your chatbot RFP, organised into four blocks: functional, technical, compliance and support. Each criterion includes a concrete acceptance threshold and a recommended weighting for the evaluation grid.
The benchmarks cited come from the analysis of 200,000 chatbot sessions across 50 partner institutions between October 2025 and February 2026 (source: Skolbot internal data).
The 12-point checklist: overview
Before diving into the detail, here is the full grid. Each criterion is grouped by block and weighted according to its impact on student recruitment.
| # | Block | Criterion | Weight |
|---|---|---|---|
| 1 | Functional | Training on institution-specific data | 15% |
| 2 | Functional | Native multilingual support | 12% |
| 3 | Functional | Automatic open house event registration | 10% |
| 4 | Functional | Analytics and reporting | 8% |
| 5 | Technical | CMS / CRM integration | 10% |
| 6 | Technical | Deployment timeline | 8% |
| 7 | Technical | Uptime SLA | 5% |
| 8 | Technical | Performance and response time | 5% |
| 9 | Compliance | PIPEDA and data hosting | 10% |
| 10 | Compliance | AI transparency obligations | 5% |
| 11 | Support | Onboarding and training | 7% |
| 12 | Support | Support SLA and dedicated CSM | 5% |
The weights sum to 100%. Adjust them to match your institution's priorities — but do not remove any criterion. A chatbot that excels functionally but fails on compliance exposes the university to real legal risk.
Functional requirements: what the chatbot must do
1. Training on institution-specific data (15%)
The chatbot must answer questions specific to your institution, not sector generalities. Analysis of 12,000 Skolbot conversations (Sept 2025 — Feb 2026) reveals that 89% of prospects ask about tuition fees and 78% about co-op programs. A chatbot that does not know your fees or your co-op offering fails on the most frequent questions.
Acceptance threshold. The chatbot must correctly answer 90% of the top 10 prospect questions (fees, career outcomes, co-op placements, residence, international exchanges, admission requirements, internships, credential recognition, campus life, financial aid) within 48 hours of deployment.
Question to ask the vendor: "How is the chatbot fed content? Automatic scraping, manual import, or both? What is the update lag when a program changes?"
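To make this threshold testable rather than rhetorical, the RFP can require a scripted acceptance run during the trial. Below is a minimal sketch in TypeScript (Node 18+), assuming a hypothetical `/api/chat` endpoint that returns `{ answer: string }`; the endpoint, payload shape, and keyword checks are placeholders to adapt to the vendor's actual API.

```ts
// Acceptance-test sketch for criterion 1. Everything about the API here
// (URL, payload, response shape) is a hypothetical placeholder.
type TestCase = { question: string; mustMention: string[] };

const topQuestions: TestCase[] = [
  { question: "How much is tuition for the BBA?", mustMention: ["tuition", "$"] },
  { question: "Do you offer co-op placements?", mustMention: ["co-op"] },
  // ...complete with the rest of your top 10 prospect questions
];

async function runAcceptanceTest(baseUrl: string): Promise<void> {
  let passed = 0;
  for (const tc of topQuestions) {
    const res = await fetch(`${baseUrl}/api/chat`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ message: tc.question }),
    });
    const { answer } = (await res.json()) as { answer: string };
    // A response passes only if it mentions every expected keyword.
    const ok = tc.mustMention.every((kw) =>
      answer.toLowerCase().includes(kw.toLowerCase())
    );
    if (ok) passed += 1;
    console.log(`${ok ? "PASS" : "FAIL"} - ${tc.question}`);
  }
  // Threshold from this criterion: 90% of the top questions answered correctly.
  console.log(`Score: ${passed}/${topQuestions.length}`);
}
```

Keyword matching is a crude proxy for correctness; during a real trial, pair it with the human three-axis review described in the FAQ below.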
2. Native multilingual support (12%)
58% of international prospects are not native speakers of the institution's primary language (source: language detection, 8,500 Skolbot conversations, 2025-2026). A monolingual chatbot cuts access to more than half of the international pipeline. In Canada, bilingual support in English and French is a baseline expectation for many institutions, particularly those serving students across provincial boundaries.
Acceptance threshold. Automatic language detection, response in the same language, coverage of at least English, French, and 8 additional languages without quality degradation.
Common trap. "Automatic translation" is not "native multilingual". A chatbot that translates its English response into French produces approximate content and misses the nuances of local education pathways (CEGEP in Quebec, college diplomas in Ontario, transfer agreements in British Columbia).
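The same trial script can check language parity. Here is a sketch assuming the hypothetical `/api/chat` endpoint from criterion 1, using the open-source `franc` language detector (npm) to compare the language of the question with the language of the answer; detection is unreliable on very short texts, so keep the probe questions substantial.

```ts
// Language-parity sketch for criterion 2. The /api/chat endpoint is a
// hypothetical placeholder; `franc` returns ISO 639-3 codes ("eng", "fra",
// "spa") or "und" when the text is too short to classify.
import { franc } from "franc";

const probes = [
  { lang: "eng", question: "What are the admission requirements for the BBA?" },
  { lang: "fra", question: "Quels sont les critères d'admission au BBA ?" },
  { lang: "spa", question: "¿Cuáles son los requisitos de admisión al BBA?" },
];

async function checkLanguageParity(baseUrl: string): Promise<void> {
  for (const probe of probes) {
    const res = await fetch(`${baseUrl}/api/chat`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ message: probe.question }),
    });
    const { answer } = (await res.json()) as { answer: string };
    // The chatbot should reply in the language the question was asked in.
    const detected = franc(answer);
    console.log(`${probe.lang} -> ${detected}: ${detected === probe.lang ? "OK" : "MISMATCH"}`);
  }
}
```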
3. Automatic open house event registration (10%)
The chatbot must detect visit intent and offer registration within the conversation, not simply link to a form. Tracking data across 35 institutions (2025-2026) shows an open house registration rate of 18.4% via chatbot versus 6.2% via form — three times higher.
Acceptance threshold. In-conversation registration (no external redirect), instant confirmation, and personalised reminders 7 days and 1 day before the event (D-7 and D-1), with a no-show rate below 20%. For reference, the no-show rate without any reminder reaches 52% (source: tracking of 4,200 open house registrations across 12 institutions, 2025-2026).
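The reminder schedule itself is simple date arithmetic; here is a minimal sketch of the D-7 and D-1 computation, with the delivery channel (email, SMS, push) left entirely to the vendor.

```ts
// D-7 / D-1 reminder dates for an open house event. Only the schedule is
// sketched here; how reminders are sent is vendor-specific.
function reminderDates(eventDate: Date): { d7: Date; d1: Date } {
  const dayMs = 24 * 60 * 60 * 1000;
  return {
    d7: new Date(eventDate.getTime() - 7 * dayMs), // one week before
    d1: new Date(eventDate.getTime() - 1 * dayMs), // the day before
  };
}

const openHouse = new Date("2026-03-14T10:00:00-04:00");
console.log(reminderDates(openHouse)); // 2026-03-07 and 2026-03-13, same time
```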
4. Analytics and reporting (8%)
Without data, the chatbot is a black box. The dashboard must provide at minimum: conversation volume, top questions, resolution rate, human handoff rate, and conversions (open house events, forms, applications).
Acceptance threshold. Dashboard accessible without technical skills, CSV/API export, segmentation by program/campus/language, and alerts on anomalies (a sudden spike in questions on one topic usually signals a problem on the site or a program change).
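The anomaly alert can be as simple as a deviation test on daily question counts per topic. Here is a sketch with an assumed trailing window and a 3-sigma threshold; both values are illustrative choices, not a vendor specification.

```ts
// Spike-detection sketch for criterion 4. Window length and the 3-sigma
// threshold are assumptions to tune on your own traffic.
function isSpike(trailingDailyCounts: number[], today: number): boolean {
  const n = trailingDailyCounts.length;
  const mean = trailingDailyCounts.reduce((a, b) => a + b, 0) / n;
  const variance =
    trailingDailyCounts.reduce((a, b) => a + (b - mean) ** 2, 0) / n;
  // Alert when today's volume exceeds the mean by more than 3 standard deviations.
  return today > mean + 3 * Math.sqrt(variance);
}

// Example: "tuition fees" questions over the past week, then 41 today.
console.log(isSpike([12, 9, 14, 11, 10, 13, 12], 41)); // true -> investigate
```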
Technical requirements: how the chatbot integrates
5. CMS / CRM integration (10%)
The chatbot must integrate with your existing ecosystem, not replace it. Critical integrations: CMS (WordPress, Drupal, headless), CRM (HubSpot, Salesforce, Colleague by Ellucian, PeopleSoft), and marketing automation tools.
Acceptance threshold. JavaScript snippet for the CMS (deployment without a developer), webhook or REST API for the CRM (real-time lead synchronisation), and complete technical documentation.
Question to ask: "Does your chatbot push leads into our CRM in real time or in batch? Which fields are synchronised?"
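To anchor that question in something concrete, the RFP can include the payload shape you expect from a real-time lead webhook. The field names below are assumptions for illustration, not any vendor's actual schema; require bidders to document theirs. The receiver sketch uses plain Node 18+ with no framework.

```ts
// Illustrative lead-webhook payload (criterion 5). Every field name here is
// a hypothetical example; ask the vendor for their real schema and mapping.
import { createServer } from "node:http";

interface LeadWebhookPayload {
  timestamp: string;          // ISO 8601, e.g. "2026-02-12T19:04:11Z"
  conversationId: string;
  email: string;
  firstName?: string;
  programOfInterest?: string; // e.g. "BBA", "BSc Computer Science"
  language: string;           // e.g. "en", "fr"
  consentGiven: boolean;      // explicit consent captured before collection
  source: "chatbot";
}

// Minimal receiver: accept the webhook, then forward to the CRM's REST API.
createServer((req, res) => {
  let body = "";
  req.on("data", (chunk) => (body += chunk));
  req.on("end", () => {
    const lead = JSON.parse(body) as LeadWebhookPayload;
    // Push into HubSpot, Salesforce, etc. here, via their documented APIs.
    console.log(`New lead: ${lead.email} (${lead.programOfInterest ?? "n/a"})`);
    res.writeHead(204).end();
  });
}).listen(3000);
```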
6. Deployment timeline (8%)
The seasonality of student recruitment makes timing critical. A chatbot deployed after the OUAC deadline (January) or after late admissions (August) has missed its value window.
Acceptance threshold. Under 2 weeks from contract signature to production, including training on institution content. Education-specialist solutions achieve 48 hours; generic solutions require 4 to 8 weeks of configuration.
7. Uptime SLA (5%)
67% of prospect activity occurs outside office hours, peaking on Sunday evenings (source: 200,000 Skolbot sessions, 2025-2026). A chatbot that goes down at weekends forfeits its main competitive advantage.
Acceptance threshold. SLA of 99.9% minimum (at most roughly 8 hours 46 minutes of downtime per year), with real-time monitoring and alerts.
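The 99.9% figure converts directly into an annual downtime budget; the arithmetic is worth writing out when comparing SLA tiers.

```ts
// Allowed downtime per year implied by an uptime SLA percentage.
function maxDowntimeHoursPerYear(slaPercent: number): number {
  return (1 - slaPercent / 100) * 365 * 24;
}

console.log(maxDowntimeHoursPerYear(99.9));  // 8.76 h/year (about 8 h 46 min)
console.log(maxDowntimeHoursPerYear(99.5));  // 43.8 h/year - far too lax
console.log(maxDowntimeHoursPerYear(99.99)); // 0.876 h/year (about 53 min)
```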
8. Performance and response time (5%)
Acceptance threshold. Response time below 5 seconds for 95% of queries. Field data shows a median of 3 seconds for education-specialist AI chatbots, versus 47 hours for email and 72 hours for contact forms (source: mystery shopping audit across 80 institutions, 2025).
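You can verify the p95 threshold yourself during the trial with a nearest-rank percentile over your own measurements; here is a sketch where `latenciesMs` stands in for data from your load test or monitoring export.

```ts
// Nearest-rank percentile check against the 5-second p95 threshold.
function percentile(values: number[], p: number): number {
  const sorted = [...values].sort((a, b) => a - b);
  const idx = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, idx)];
}

// Replace with your own measured response times (milliseconds).
const latenciesMs = [1200, 2400, 3100, 2900, 4800, 3300, 2100, 2600, 2700, 3000];
const p95 = percentile(latenciesMs, 95);
console.log(`p95 = ${p95} ms -> ${p95 < 5000 ? "PASS" : "FAIL"} (threshold: 5000 ms)`);
```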
Compliance: what the law requires
9. PIPEDA and data hosting (10%)
Any chatbot that collects prospect data — including data from minors — must comply with PIPEDA (Personal Information Protection and Electronic Documents Act). In Quebec, Law 25 (Loi 25) imposes additional obligations, including privacy impact assessments and enhanced consent requirements. Provincial privacy legislation in Alberta (PIPA) and British Columbia (BC PIPA) may also apply.
Acceptance threshold. Data hosted in Canada, signed DPA (Data Processing Agreement), accessible processing records, operational right to erasure within 72 hours, and explicit consent before any data collection. The Office of the Privacy Commissioner publishes specific guidance on AI and data protection relevant to education.
Critical question: "Where is conversation data hosted? Who has access? What is the deletion process upon request?"
10. AI transparency obligations (5%)
Canada's proposed Artificial Intelligence and Data Act (AIDA) and the Treasury Board Directive on Automated Decision-Making set transparency expectations for AI systems. While AIDA's full enforcement timeline is evolving, best practice for Canadian institutions requires clear disclosure when prospects interact with AI and documented human oversight mechanisms.
Acceptance threshold. Explicit notice "You are chatting with an AI assistant" at the start of every conversation, accessible technical documentation of the AI system, and a mechanism to transfer to a human at any time.
Support: what makes the difference after signing
11. Onboarding and training (7%)
A high-performing chatbot poorly configured produces the same results as a mediocre chatbot. Onboarding must include: assisted initial setup, admissions team training, and content validation before go-live.
Acceptance threshold. Dedicated training session (not a generic webinar), joint validation of the chatbot on the 20 most frequent questions, and customised internal documentation.
12. Support SLA and dedicated CSM (5%)
Acceptance threshold. Support response time below 4 hours on business days, a dedicated CSM (Customer Success Manager) with education sector knowledge, and quarterly performance reviews with optimisation recommendations.
Evaluation grid: the ready-to-use template
Use this matrix to score each candidate solution. Each criterion is rated from 1 (insufficient) to 5 (excellent), then multiplied by its weight.
| Criterion | Wt. | Solution A | Solution B | Solution C |
|---|---|---|---|---|
| 1. Institution-specific training | 15% | _/5 x 0.15 = _ | _/5 x 0.15 = _ | _/5 x 0.15 = _ |
| 2. Native multilingual | 12% | _/5 x 0.12 = _ | _/5 x 0.12 = _ | _/5 x 0.12 = _ |
| 3. Open house auto-registration | 10% | _/5 x 0.10 = _ | _/5 x 0.10 = _ | _/5 x 0.10 = _ |
| 4. Analytics | 8% | _/5 x 0.08 = _ | _/5 x 0.08 = _ | _/5 x 0.08 = _ |
| 5. CMS/CRM integration | 10% | _/5 x 0.10 = _ | _/5 x 0.10 = _ | _/5 x 0.10 = _ |
| 6. Deployment timeline | 8% | _/5 x 0.08 = _ | _/5 x 0.08 = _ | _/5 x 0.08 = _ |
| 7. Uptime SLA | 5% | _/5 x 0.05 = _ | _/5 x 0.05 = _ | _/5 x 0.05 = _ |
| 8. Response time | 5% | _/5 x 0.05 = _ | _/5 x 0.05 = _ | _/5 x 0.05 = _ |
| 9. PIPEDA compliance | 10% | _/5 x 0.10 = _ | _/5 x 0.10 = _ | _/5 x 0.10 = _ |
| 10. AI transparency | 5% | _/5 x 0.05 = _ | _/5 x 0.05 = _ | _/5 x 0.05 = _ |
| 11. Onboarding | 7% | _/5 x 0.07 = _ | _/5 x 0.07 = _ | _/5 x 0.07 = _ |
| 12. Support / CSM | 5% | _/5 x 0.05 = _ | _/5 x 0.05 = _ | _/5 x 0.05 = _ |
| TOTAL | 100% | _/5 | _/5 | _/5 |
How to interpret the score. Below 3/5, the solution has structural gaps. Between 3 and 4, it works with trade-offs. Above 4, it covers the needs of a Canadian post-secondary institution.
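The grid reduces to a single weighted sum per solution. Here is a sketch of the computation using the default weightings from this article and an illustrative set of ratings:

```ts
// Weighted scoring for the 12-criterion grid. Weights are the defaults
// from this article and sum to 1.00; ratings run from 1 to 5.
const weights: Record<string, number> = {
  institutionTraining: 0.15, multilingual: 0.12, openHouseRegistration: 0.10,
  analytics: 0.08, cmsCrmIntegration: 0.10, deploymentTimeline: 0.08,
  uptimeSla: 0.05, responseTime: 0.05, pipedaCompliance: 0.10,
  aiTransparency: 0.05, onboarding: 0.07, supportCsm: 0.05,
};

function weightedScore(ratings: Record<string, number>): number {
  return Object.entries(weights).reduce(
    (total, [criterion, weight]) => total + (ratings[criterion] ?? 0) * weight,
    0
  );
}

// Illustrative example: strong functionally, weak on compliance.
const solutionA = {
  institutionTraining: 5, multilingual: 4, openHouseRegistration: 5,
  analytics: 4, cmsCrmIntegration: 4, deploymentTimeline: 5,
  uptimeSla: 4, responseTime: 5, pipedaCompliance: 1,
  aiTransparency: 1, onboarding: 4, supportCsm: 3,
};
console.log(weightedScore(solutionA).toFixed(2)); // "3.88" out of 5
```

Note how compliance failures pull an otherwise excellent solution below the 4/5 bar, which is exactly the behaviour the weighting is designed to produce.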
For a detailed comparison of market solutions, see our AI chatbot comparison for higher education. To understand why chatbots outperform contact forms, read our chatbot vs form analysis. You can also explore all our head-to-head analyses on our comparison page.
FAQ
Who should write the chatbot RFP within the institution?
The specification should be co-authored by three parties: the admissions directorate (defining functional needs), IT (validating technical and integration constraints), and the privacy officer or legal team (ensuring PIPEDA, Loi 25, and provincial privacy law compliance). A steering committee of 3 to 5 people is sufficient. Involving too many stakeholders lengthens the process without improving the document quality.
How long does it take to write a chatbot RFP?
With this grid as a starting point, allow 2 to 3 weeks from kickoff to finalised document. The longest phase is not the writing — it is internal alignment on priorities (criterion weighting). Start with the summary grid from this article, adjust the weightings in committee, then detail the acceptance thresholds.
Should the RFP include a budget range?
Yes, include a budget range. This filters out solutions outside your scope and avoids wasting time on demonstrations from vendors priced five times above your budget. For an education-specialist AI chatbot, the range is between $300 CAD and $1,200 CAD per month on a per-institution flat fee. Generic B2B solutions start at USD 2,500 per month. Including this information enables vendors to propose the most relevant offering.
Must the RFP reference Canadian AI legislation?
Yes, explicitly. Canada's Artificial Intelligence and Data Act (AIDA) and the Treasury Board Directive on Automated Decision-Making set expectations for AI transparency and accountability. The RFP must require transparency compliance (disclosure that the user is interacting with AI) and verify that the proposed system meets current Canadian standards. The federal government's Responsible Use of AI framework also provides guidance on safe AI deployment in public-facing contexts including education.
How should you evaluate chatbot response quality during the trial?
Prepare a list of 30 real questions drawn from your exchanges with prospects (email, phone, social media). Submit them to the chatbot in test mode and evaluate each response on three axes: accuracy (is the information correct?), completeness (does the response cover the question?), and tone (is the response appropriate for a student prospect?). A score of 80% or above on the 30 questions indicates a viable solution. For further guidance on return on investment, see our student chatbot ROI guide.
Test Skolbot on your institution in 30 seconds