Case Study

AI Compliance in UK Further Education

Change Squads | Technical Transformation and Modernisation | Regulatory Compliance Navigation

The Challenge

Basingstoke College of Technology (BCoT) identified an opportunity to apply AI to support staff wellbeing and enhance assessment consistency.

The key objectives were to reduce the time teachers spent marking outside working hours and to bring greater standardisation to assessment and feedback. The solution was explicitly positioned as a teacher support tool, not a student-facing system and not a mechanism for directive or automated feedback to learners.

The project’s success depended not only on capability and adoption, but also on ensuring the approach was compliant, risk-aware and defensible in the context of assessment.

Operating within the UK further education sector, BCoT sits at a point where regulatory expectations are particularly stringent, driven by:

  • The age of learners (predominantly under 18)
  • Oversight by education governing bodies (e.g. JCQ)
  • High sensitivity around assessment integrity
  • Strict expectations regarding automated decision-making
  • The application of UK GDPR

Our team supported BCoT in navigating the legal, regulatory and ethical considerations associated with implementing Fixter, an AI-enabled marking assistant.

Important note: Perform Partners is not a law firm and does not provide legal advice. We worked collaboratively with BCoT stakeholders to align solution design with legal, compliance and governance requirements.

Our Approach

As part of our delivery approach, we identified the key compliance concerns and the practical steps needed to address them, demonstrating a responsible, enterprise-grade approach to AI adoption in education.

Key Compliance Concerns Identified:

1. Data Protection and UK GDPR

  • Use of student assessment data, which may constitute personal data
  • Requirements around data minimisation, purpose limitation, and retention
  • Expectations around data residency (UK-based storage preferred)
  • Prohibition on unnecessary processing of identifiable student data

2. Automated Decision-Making

Under UK GDPR, individuals have rights related to decisions made solely by automated means. In an education context, this introduces heightened risk. The key concern was whether marks or assessment outcomes could be considered “solely automated decisions”.

3. Children and Vulnerable Data Subjects

Even at the secondary/further education level, learners are treated as a protected group. Key implications included:

  • No profiling of learners
  • Clear opt-out mechanisms
  • Conservative interpretation of acceptable AI use

4. Academic Integrity and Directive Feedback

Education regulators draw a clear line between assessment and learning support. To comply, it was important to avoid AI-driven feedback that could be interpreted as assisting students in improving assessed work.

5. Model Training and Intellectual Property

BCoT required assurance that:

  • Their data would not be used to train AI models
  • No proprietary teaching or assessment material would be absorbed into third-party systems

How the Concerns Were Addressed:

1. Anonymisation and Redaction

  • All student work was anonymised prior to processing
  • Personal identifiers were removed before data entered the AI pipeline
  • Output was also checked to prevent accidental reintroduction of personal data
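To illustrate the redact-then-verify pattern, here is a minimal Python sketch. The regular expressions and the student ID format are assumptions invented for the example, not the patterns used in the production pipeline, which would rely on a vetted PII-detection approach.

```python
import re

# Illustrative patterns only: a production pipeline would use a vetted
# PII-detection library and the college's actual identifier formats.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "STUDENT_ID": re.compile(r"\b[A-Z]{2}\d{6}\b"),  # assumed ID format
}

def redact(text: str) -> str:
    """Replace identifiers with placeholders before text enters the AI pipeline."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

def reintroduces_pii(text: str) -> bool:
    """Output-side check: flag AI responses that leak identifiers back in."""
    return any(pattern.search(text) for pattern in PATTERNS.values())

submission = "Candidate AB123456 (ab123456@college.ac.uk) argues that..."
anonymised = redact(submission)          # "Candidate [STUDENT_ID] ([EMAIL]) argues that..."
assert not reintroduces_pii(anonymised)  # safe to pass downstream
```

Running the same check on model output before it is shown to a teacher guards against the accidental reintroduction of personal data noted above.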

2. Retrieval-Augmented Generation (RAG)

  • The solution used inference-only techniques
  • No model training on BCoT data
  • Documents were used transiently to support marking logic
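The sketch below shows the shape of the inference-only retrieval step. The hashed bag-of-words embed function is a toy placeholder so the example runs self-contained; a real solution would call an embedding model at inference time, still without any training or fine-tuning on BCoT data.

```python
import numpy as np

def embed(texts: list[str]) -> np.ndarray:
    """Toy hashed bag-of-words stand-in for a real embedding model,
    included only so the sketch runs end-to-end."""
    vecs = np.zeros((len(texts), 256))
    for i, text in enumerate(texts):
        for token in text.lower().split():
            vecs[i, hash(token) % 256] += 1.0
    return vecs

def retrieve(query: str, passages: list[str], passage_vecs: np.ndarray, k: int = 2) -> list[str]:
    """Return the k mark-scheme passages most similar to the query."""
    q = embed([query])[0]
    sims = passage_vecs @ q / (
        np.linalg.norm(passage_vecs, axis=1) * np.linalg.norm(q)
    )
    return [passages[i] for i in np.argsort(sims)[::-1][:k]]

mark_scheme = [
    "Award 2 marks for identifying both causes of the reaction.",
    "Award 1 mark for a partially correct definition.",
    "Do not deduct marks for spelling errors.",
]
answer = "The two causes of the reaction are temperature and concentration."

# Retrieved extracts are placed in the prompt transiently at inference
# time; nothing is ever written back into the model's weights.
context = "\n".join(retrieve(answer, mark_scheme, embed(mark_scheme)))
prompt = f"Mark scheme extracts:\n{context}\n\nAnonymised answer:\n{answer}"
```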

3. Human-in-the-Loop Design

  • AI outputs were advisory, not determinative
  • Final marking decisions remained with teachers
  • This avoided classification as fully automated decision-making
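One way to make the advisory-only design concrete is to enforce it in the data model itself, as in the sketch below. The field names are hypothetical, not those of the production system.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MarkingRecord:
    """Illustrative structure: the AI mark is advisory, and no outcome
    exists until a named teacher records the final decision."""
    submission_id: str
    ai_suggested_mark: Optional[int] = None   # advisory only
    ai_rationale: Optional[str] = None        # shown to the teacher, never the learner
    teacher_final_mark: Optional[int] = None  # the determinative value
    teacher_id: Optional[str] = None

    @property
    def is_decided(self) -> bool:
        # Explicit human sign-off keeps the process outside
        # "solely automated" decision-making under UK GDPR.
        return self.teacher_final_mark is not None and self.teacher_id is not None
```

Because a record only reaches a final state when both a mark and a teacher identity are present, no outcome can ever rest on AI output alone.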

4. Strict Functional Boundaries

  • No student-facing outputs
  • No guidance on “how to improve” assessment responses
  • No generative feedback that could be construed as directive feedback
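A simple illustration of such a boundary check is sketched below. The marker phrases are invented for the example; a real guardrail would combine pattern checks with prompt constraints and human spot-checks rather than relying on a keyword filter alone.

```python
# Illustrative marker phrases, assumed for this sketch.
DIRECTIVE_MARKERS = (
    "to improve",
    "you should",
    "next time",
    "rewrite this",
)

def violates_boundary(output: str) -> bool:
    """True if generated text reads as directive feedback to a learner."""
    lowered = output.lower()
    return any(marker in lowered for marker in DIRECTIVE_MARKERS)

def release_to_teacher(output: str) -> str:
    """Gate every generated output before it reaches the teacher-facing tool."""
    if violates_boundary(output):
        raise ValueError("Blocked: output resembles directive student feedback")
    return output
```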

5. Governance Alignment

  • Solution design aligned with UK GDPR principles
  • Consideration given to the EU AI Act’s risk-based approach (despite the UK not having adopted it)
  • Explicit acknowledgement of sector-specific expectations

Your education institution could take the same step. Ready to unlock growth? Let’s Talk