AI Compliance in UK Further Education

Basingstoke College of Technology (BCoT) identified an opportunity to apply AI to support staff wellbeing and enhance assessment consistency.
The key objectives were to reduce the time teachers spent marking outside working hours and to bring greater standardisation to assessment and feedback. The solution was explicitly positioned as a teacher support tool, not a student-facing system, and not a mechanism for directive or automated feedback to learners.
The project’s success depended not only on capability and adoption, but also on ensuring the approach was compliant, risk-aware and defensible in the context of assessment.
Operating within the UK further education sector, BCoT sits at a point where regulatory expectations are particularly stringent, owing to the age of learners (predominantly under 18), oversight by education governing bodies (e.g. JCQ), high sensitivity around assessment integrity, strict expectations regarding automated decision-making, and the application of UK GDPR.
Our team supported BCoT in navigating the legal, regulatory and ethical considerations associated with implementing an AI-enabled marking assistant, Fixter.
Important note: Perform Partners is not a legal firm and does not provide legal advice. We worked collaboratively with BCOT stakeholders to align solution design with legal, compliance, and governance requirements.
As part of our delivery approach, we identified the key compliance concerns and the practical steps needed to address them, demonstrating a responsible, enterprise-grade approach to AI adoption in education.
1. Data Protection and UK GDPR
2. Automated Decision-Making
Under UK GDPR (Article 22), individuals have rights in relation to decisions made solely by automated means. In an education context this heightens risk: the key concern was whether marks or assessment outcomes could be considered “solely automated decisions”.
3. Children and Vulnerable Data Subjects
Even at the secondary/further education level, learners are treated as a protected group. Key implications included:
4. Academic Integrity and Directive Feedback
Education regulators draw a clear line between assessment and learning support. To comply, it was important to avoid AI-driven feedback that could be interpreted as assisting students in improving assessed work.
5. Model Training and Intellectual Property
BCoT required assurance that:
To address these concerns, the solution design incorporated the following safeguards.
1. Anonymisation and Redaction
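As an illustration of this safeguard, learner identifiers can be redacted before any text leaves the institution. The patterns below (email addresses, a made-up student ID format) are hypothetical examples only; a production deployment would use a vetted PII-detection library rather than hand-rolled regexes.

```python
import re

# Hypothetical patterns for illustration; the real ID format and the
# full set of identifiers to redact are assumptions, not BCoT's spec.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "STUDENT_ID": re.compile(r"\bBCOT\d{6}\b"),  # assumed ID format
}

def redact(text: str) -> str:
    """Replace identifying tokens with labelled placeholders before any model call."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Submission from jo.smith@example.com, ID BCOT123456"))
```

The model only ever sees placeholders, so a marked-up draft can be mapped back to the learner inside the college's own systems.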
2. Retrieval-Augmented Generation (RAG)
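A minimal sketch of the RAG pattern, under assumptions: rubric extracts are retrieved by relevance to the submission and supplied as grounding context, so the model marks against the college's own criteria rather than its general knowledge. The rubric text is invented, and the word-overlap ranking stands in for the embedding search a real system would use.

```python
# Illustrative rubric extracts (invented for this sketch).
RUBRIC = [
    "Band 3: evaluates sources critically and justifies conclusions",
    "Band 2: describes sources with some analysis",
    "Band 1: lists sources with little analysis",
]

def retrieve(query: str, passages: list[str], k: int = 2) -> list[str]:
    """Rank passages by simple word overlap with the query — a stand-in
    for vector similarity search in a production RAG pipeline."""
    q = set(query.lower().split())
    scored = sorted(passages, key=lambda p: -len(q & set(p.lower().split())))
    return scored[:k]

context = retrieve("The essay evaluates sources and justifies conclusions", RUBRIC)
prompt = "Suggest a mark against these rubric extracts only:\n" + "\n".join(context)
```

Because the prompt is constrained to retrieved rubric text, the model's suggestions stay traceable to published assessment criteria.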
3. Human-in-the-Loop Design
4. Strict Functional Boundaries
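A functional boundary of this kind can be enforced with an output guard: generated feedback is screened so that directive, improvement-oriented phrasing never passes through to learners. The phrase list here is a hypothetical example, not a complete policy.

```python
# Illustrative, non-exhaustive list of directive phrasings to block.
DIRECTIVE_PHRASES = ("you should add", "rewrite this as", "to improve, change")

def teacher_only(feedback: str) -> str:
    """Raise if feedback reads as directive guidance to the learner;
    such text must remain teacher-facing."""
    lowered = feedback.lower()
    if any(phrase in lowered for phrase in DIRECTIVE_PHRASES):
        raise PermissionError("Directive feedback must stay teacher-facing.")
    return feedback
```

In practice such a filter would sit alongside, not replace, the human review step, since keyword matching alone cannot catch every directive formulation.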
5. Governance Alignment