AI Safety
Our Commitment
Every learner deserves a guide that shows them future options and helps them get where they want to go.
CampusEvolve supports high school students, out-of-school youth, and adult learners as they navigate pathways to vibrant careers. We are developing a unique natural-language AI tool that helps them explore options, apply for financial aid, enroll in programs, and find resources to support their wellbeing along the way.
Our AI system is designed so it does not steer learners toward or away from opportunities based on who they are or where they come from. It draws from verified Washington State education, training, and career information — not just general internet content. And the people most affected by this technology — learners and the advisors who support them — are co-designers in how it works and how it can get better.
Because many of the people we serve are minors — and many come from communities that have historically faced barriers to opportunity — we hold ourselves to rigorous standards of AI safety, fairness, data privacy, and responsible use, including bias monitoring, strong data protections, and ongoing evaluation of how our system performs for learners.
Learners always know they are interacting with AI, not a person. The system provides guidance and suggestions, and it is designed to work alongside and support educators, advisors, and counselors.
CampusEvolve’s AI Safety work is done in partnership with the Washington Student Achievement Council. WSAC is involved with two independent AI safety tests of our system — one before pilot launch and one in mid-2026. These tests, which involve student and expert advisor ‘red-teaming’ of the system, are contractual commitments, not optional reviews.
The first section of our AI Safety Overview below describes the “Why This Matters” rationale for our safety approach. The second section provides detailed technical information about “How We Do It.” We can share further technical information with those interested in a deeper dive.
Why This Matters: Our AI Safety Approach
Answers Grounded in Real Programs
When a learner asks about financial aid or a training program, they deserve a real answer — not a confident-sounding guess.
Our AI draws from verified sources: real colleges, real financial aid, real career pathways. Every AI response is grounded in a curated knowledge base of verified Washington State educational resources, financial aid programs, career data, and support services rather than generating answers from general training data alone. As we begin to work outside of Washington, our data will include that state or region’s information.
We Screen Every Interaction to Keep Learners Safe
Every message is screened instantly and automatically. Anything that looks safe moves through in milliseconds. Anything that doesn't gets a deeper review from multiple independent safety evaluations running in parallel.
Your Information Stays Yours
We collect only what is needed to help a learner find their next step. We don't sell it, share it, or use it for advertising. If a learner accidentally shares something sensitive — such as a Social Security number or home address — our system detects it automatically and applies protective safeguards.
We do not build behavioral profiles of learners for advertising, marketing, or unrelated purposes. Any analytics we perform are used only to improve safety, system performance, and learner support.
CampusEvolve's AI complies fully with FERPA and COPPA, federal laws that protect learner data privacy. See how we do this below.
Every Learner Sees the Full Range of What's Possible
This is a core design principle, not a feature. A learner in Yakima and a learner in Seattle see the same range of options — unless a learner proactively requests local resources. The AI doesn't make assumptions about which pathway options they're capable of pursuing, and it adapts every response to the learner's age, reading level, and context.
Federal civil rights law — including Title VI and Title IX — prohibits discriminatory outcomes in education systems, including automated ones. We take that obligation seriously. Our bias detection evaluates every AI response for demographic assumptions, pathway bias, cultural assumptions, and language that might assume a learner's starting point.
Infrastructure Designed for Reliability and Safety
Our platform runs on Microsoft Azure with enterprise-grade encryption, and our safety systems are designed to fail safe.
Independent Safety Testing with Real Learners
We partner with Humane Intelligence, an independent AI safety organization specializing in bias and harm evaluation, to conduct red-team testing of the platform.
How Our AI Safety Works Technically
CampusEvolve’s AI Guide helps learners explore careers, understand post-secondary options, and take concrete next steps. Every AI response is grounded in Retrieval-Augmented Generation (RAG) — meaning the AI draws from a curated knowledge base of verified Washington State educational resources, financial aid programs, career data, and support services rather than generating answers from general training data alone.
Our knowledge base includes content from sources such as Washington Career Bridge, O*NET career data, FAFSA guidance, WA State labor market information, college program catalogs, and community organizations. Content is reviewed and curated by our team before it enters the system.
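At a high level, the retrieval step can be sketched as follows. This is an illustrative outline only: the document structure, the scoring function, and the prompt wording are hypothetical, and a production system would typically use semantic embeddings rather than keyword overlap.

```python
# Illustrative sketch of retrieval-augmented generation (RAG) grounding.
# Document fields, scoring, and prompt format are hypothetical examples.

def retrieve(query: str, knowledge_base: list[dict], top_k: int = 2) -> list[dict]:
    """Rank curated documents by naive keyword overlap with the query."""
    query_terms = set(query.lower().split())
    scored = [
        (len(query_terms & set(doc["text"].lower().split())), doc)
        for doc in knowledge_base
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:top_k] if score > 0]

def build_prompt(query: str, knowledge_base: list[dict]) -> str:
    """Ground the model's answer in retrieved, verified sources only."""
    sources = retrieve(query, knowledge_base)
    context = "\n".join(f"[{d['source']}] {d['text']}" for d in sources)
    return f"Answer using only these verified sources:\n{context}\n\nQuestion: {query}"
```

The key property is that the model is instructed to answer from retrieved, curated content, not from its general training data alone.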
Multi-Layer AI Safety Pipeline
Every learner message passes through a multi-layer safety evaluation system before and during AI response generation.
Layer 1: Fast Classification of Messages
Incoming messages are immediately screened by a fast classifier that combines machine learning models with Azure AI Content Safety. This layer evaluates content across four categories:
- Hate speech and harassment
- Violence and threats
- Sexual content
- Self-harm indicators
Messages that are clearly safe pass through in milliseconds. Messages with elevated risk scores are escalated to Multi-Provider AI Evaluation and human oversight.
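The routing decision in this layer can be sketched as follows. The category names mirror the four listed above; the threshold value and function names are assumptions for illustration only.

```python
# Hypothetical sketch of the Layer 1 routing decision. Category scores would
# come from a fast classifier (ML models plus a content-safety service); the
# threshold here is an assumed value, not a production setting.

ESCALATION_THRESHOLD = 0.3  # assumed value for illustration

CATEGORIES = ("hate_harassment", "violence_threats", "sexual_content", "self_harm")

def route_message(scores: dict[str, float]) -> str:
    """Return 'pass' for clearly safe messages, 'escalate' otherwise."""
    if any(scores.get(cat, 0.0) >= ESCALATION_THRESHOLD for cat in CATEGORIES):
        return "escalate"  # goes to multi-provider evaluation and human oversight
    return "pass"          # clearly safe messages continue in milliseconds
```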
Layer 2: Multi-Provider AI Evaluation
When the fast classifier detects ambiguous or elevated-risk content, a second evaluation layer activates. This layer uses multiple independent AI providers running in parallel to assess the message across detailed safety dimensions:
- Immediate risk — Is there an urgent safety concern?
- Self-harm signals — Are there direct or indirect indicators of self-harm or suicidal ideation?
- Abuse and harassment — Does the message contain hate speech or directed harassment?
- Violence — Are there threats or references to violence?
Each provider independently scores the message. Scores are aggregated to produce a consensus-based risk assessment, reducing the chance that any single model’s blind spot creates a gap in safety coverage.
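One plausible way to aggregate independent provider scores into a consensus assessment, sketched under the assumption of 0-to-1 scores per safety dimension; the actual aggregation rule is not specified here.

```python
# Illustrative consensus aggregation across independent safety evaluators.
# Averaging per dimension, then taking the worst dimension, is one plausible
# scheme; it is an assumption, not the documented production rule.

from statistics import mean

def consensus_risk(provider_scores: list[dict[str, float]]) -> float:
    """Aggregate per-provider 0-1 scores into a single 0-1 risk value."""
    dimensions = {dim for scores in provider_scores for dim in scores}
    per_dimension = {
        dim: mean(scores.get(dim, 0.0) for scores in provider_scores)
        for dim in dimensions
    }
    # The worst dimension drives the overall assessment, so one provider's
    # blind spot on a dimension is offset by the others' scores there.
    return max(per_dimension.values(), default=0.0)
```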
Risk-Based Response Actions
Based on the evaluated risk level, the system takes one of the following actions:
- Low risk: Message proceeds normally
- Moderate risk: Message is logged for monitoring and trend analysis
- High risk: An alert is generated for review
- Critical risk: The conversation is immediately redirected to crisis resources, such as human advisors, Suicide & Crisis hotlines, and other appropriate and vetted responders
When a critical safety concern is detected, the AI responds with empathy and connects the learner directly to professional support resources — it does not attempt to provide counseling.
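The tiering above can be sketched as a simple mapping. Only the four tiers and their actions come from this document; the numeric cut-offs are hypothetical.

```python
# Sketch of the risk-tier-to-action mapping. The 0-1 cut-off values are
# assumed for illustration; only the four tiers are documented.

def action_for_risk(score: float) -> str:
    if score >= 0.9:
        return "redirect_to_crisis_resources"  # critical
    if score >= 0.6:
        return "alert_for_review"              # high
    if score >= 0.3:
        return "log_for_monitoring"            # moderate
    return "proceed"                           # low
```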
Privacy and Compliance
FERPA and COPPA
CampusEvolve is designed with student privacy regulations at its core:
- FERPA (Family Educational Rights and Privacy Act): We do not collect, store, or share protected educational records. Our AI Guide works with information learners voluntarily share during their session.
- COPPA (Children’s Online Privacy Protection Act): We implement age-appropriate safeguards and do not collect personal information from children under 13 without appropriate consent mechanisms.
Our system includes a dedicated Privacy Compliance evaluator that screens every message for potential personally identifiable information (PII) exposure. It classifies detected PII by sensitivity level:
- High sensitivity: Social Security numbers, home addresses, health records, family financial information
- Medium sensitivity: Email addresses, phone numbers, student IDs, specific grades or test scores
- Low sensitivity: Grade level, school district, general academic interests
When high-sensitivity PII is detected, the system flags the interaction and applies appropriate data handling procedures including data minimization and access controls.
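A minimal sketch of sensitivity-tiered PII detection, assuming regex-based pattern matching; the production detector and its tier assignments are likely more sophisticated than this.

```python
# Illustrative PII sensitivity classifier. The patterns and tiers below are
# simplified examples, not the production detection rules.

import re

PII_PATTERNS = {
    "high": [
        r"\b\d{3}-\d{2}-\d{4}\b",          # US Social Security number format
    ],
    "medium": [
        r"\b[\w.+-]+@[\w-]+\.[\w.]+\b",    # email address
        r"\b\d{3}[-.]\d{3}[-.]\d{4}\b",    # US phone number
    ],
}

def pii_sensitivity(message: str) -> str:
    """Return the highest sensitivity tier detected, or 'none'."""
    for tier in ("high", "medium"):
        if any(re.search(p, message) for p in PII_PATTERNS[tier]):
            return tier
    return "none"
```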
Data Minimization
We follow a data minimization approach:
- We collect only what is needed to provide pathway guidance
- Sensitive identifiers are hashed before storage — we store a one-way hash, not the original value
- Student data is isolated per user — no learner can access another learner’s information
- We do not sell, share, or use learner data for advertising
Authentication and Access Control
All access to the CampusEvolve platform requires authentication. We use industry-standard JWT (JSON Web Token) authentication with RS256 cryptographic signing through Auth0, a leading identity management platform. Service-to-service communication within our infrastructure is secured with separate API key authentication.
Content Quality and Integrity
Grounded in Real Data
Unlike general-purpose AI chatbots, CampusEvolve’s AI is grounded in curated, Washington State-specific content relevant to pathways navigation. Our RAG architecture means responses reference real education and training programs, real financial aid opportunities, and real career data — not hallucinated information.
Our knowledge base is organized into five specialized domains:
- Education and Training — vocational training programs, 2- and 4-year degree programs, work-based credentials, and industry certifications
- Financial Aid — scholarships, grants, FAFSA, College Bound Scholarship
- Career Resources — career data, labor market information, local cost of living data
- Supporting Services — community organizations, mentoring, housing, food assistance
- Regional Resources — locally relevant opportunities and services
Academic Integrity
Our system includes an Academic Integrity evaluator that detects and declines requests to complete assignments, write papers, or help learners cheat. CampusEvolve is designed to guide learners toward opportunities, while empowering them to do the work that drives their own growth.
Bias Detection
We actively monitor for bias in AI responses through a dedicated Bias Detection evaluator that checks for:
- Demographic assumptions — avoiding stereotypes based on a learner’s race, ethnicity, gender, religion, or socioeconomic status
- Pathway bias — ensuring the AI doesn’t steer learners toward or away from options based on perceived background
- Cultural assumptions — identifying cultural references that may not translate across diverse communities
- Ability assumptions — avoiding assumptions about a learner’s physical or cognitive capabilities
- Diversity of options — ensuring the AI presents multiple pathways rather than funneling learners toward a single option
- Language complexity — keeping responses accessible, targeting a 10th-grade reading level and flagging unnecessary jargon
Age-Appropriate Content
All AI responses are filtered through an Age-Appropriate Content evaluator that adjusts content standards based on the learner’s age group, screening for profanity, adult content, and violence to ensure every interaction is suitable for our learner audience.
Infrastructure Security
Cloud Architecture
CampusEvolve runs on Microsoft Azure, leveraging enterprise-grade cloud security:
- Encryption in transit: All data transmitted between users, our services, and our databases is encrypted using TLS 1.2 or higher
- Encryption at rest: All stored data is encrypted using Azure-managed encryption
- Infrastructure as Code: Our entire infrastructure is defined and version-controlled using Terraform, ensuring consistent, auditable, and reproducible deployments
- Network security: Database access is restricted through network security groups and authentication requirements
- Monitoring: Centralized logging and alerting through Azure Log Analytics enables us to detect and respond to issues quickly
Resilient Design
Our safety systems are designed to fail safe:
- If the fast classifier is unavailable, a conservative fallback score triggers the deeper evaluation layer
- If the multi-provider AI evaluation fails, the system defaults to a cautious response posture rather than allowing potentially unsafe content through
- If the content moderation service is temporarily unreachable, messages are held for review rather than passed through unchecked
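The fail-safe pattern common to all three fallbacks can be sketched like this; the fallback score and the function names are illustrative assumptions.

```python
# Sketch of the fail-safe pattern: if an evaluator is unavailable, assume a
# conservative (elevated) risk score rather than passing the message through
# unchecked. The fallback value is an assumption for illustration.

CONSERVATIVE_FALLBACK_SCORE = 0.7  # high enough to trigger deeper review

def evaluate_with_fallback(message: str, classifier) -> float:
    """Return the classifier's risk score, or a cautious default on failure."""
    try:
        return classifier(message)
    except Exception:
        # Fail safe: treat the message as risky enough to escalate.
        return CONSERVATIVE_FALLBACK_SCORE
```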
Independent Third-Party Evaluation
We believe AI safety claims should be validated externally, not just internally. CampusEvolve partners with Humane Intelligence, an independent AI safety organization, to conduct structured red-teaming of our AI system.
What This Involves
Humane Intelligence is co-designing and leading two virtual AI red-teaming workshops in which real learners — CampusEvolve’s own users — test the system for vulnerabilities, harmful outputs, and edge cases. This includes:
- A formal harm taxonomy defining the categories of risk being tested
- An adjudication rubric establishing what constitutes an acceptable vs. unacceptable AI response
- Structured exploit and vulnerability testing designed by AI safety experts
- Learner and advisor participation — the people most affected by the AI’s behavior are directly involved in evaluating its safety
- Anonymous participation options to encourage honest, uninhibited testing
Learners and advisors interact with the CampusEvolve AI through Humane Intelligence’s dedicated red-teaming platform, where they can identify and report risks in a structured way. Humane Intelligence provides pre-event training so participants understand the methodology and how to test effectively.
Why This Matters
Internal testing, no matter how thorough, has blind spots. Independent red-teaming by an external organization with learner participants provides:
- Unbiased assessment — external evaluators aren’t influenced by internal assumptions
- Real-world testing — learners interact with the system the way actual users would, uncovering edge cases that synthetic testing misses
- Learner voice — centering the people the AI is designed to serve
- Established methodology — Humane Intelligence brings proven frameworks for AI harm evaluation
- Actionable findings — results directly inform improvements to our safety pipeline, prompts, and knowledge base
After each workshop, Humane Intelligence shares a post-event report with all findings and data, along with recommendations for building ongoing internal red-teaming capabilities.
Continuous Improvement
AI safety is not a one-time implementation — it’s an ongoing practice.
Internal Testing and Review
We continuously:
- Test our safety pipeline with structured test suites covering unsafe academic questions, emotional support scenarios, self-harm indicators, violence, harassment, academic misconduct, and edge cases
- Review AI response quality with subject matter experts in education, training, and learner advising
- Update our knowledge base with current, verified Washington State resources
- Refine our prompts based on real learner interaction patterns and expert feedback
- Monitor for emerging risks and adapt our safety evaluations accordingly
Our safety test suite includes baseline regression testing to ensure that updates to any component of the system don’t degrade our ability to detect and respond to safety concerns.
Learner Journey Analysis
We regularly analyze how learners actually use the platform — where they engage, where they drop off, and where the AI’s responses fall short. This analysis is guided by a formal quality rubric that evaluates every AI response on four dimensions:
- Relevance — Does the response advance the learner’s post-secondary exploration?
- Grounding — Is the response based on real post-secondary options and verified data?
- Actionability — Does the response include a concrete next step the learner can take?
- Readability — Is the response clear and accessible for the learner’s reading level?
Findings from learner journey analysis and expert review feed directly back into prompt improvements, knowledge base updates, and safety pipeline refinements.
Questions or Concerns
If you have questions about our AI safety practices, or if you’d like to report a concern, please contact us at safety@campusevolve.ai.
Last updated: March 2026