
Navigating AI detection in recruitment: What every HR professional needs to know

4 June 2025

As AI tools increasingly find their way into recruitment processes, HR professionals face a complex balancing act: how to harness the benefits of automation while staying on the right side of employment and data protection law. One area gaining rapid traction - and scrutiny - is the use of AI detection software to flag job applications that may have been written by generative AI tools like ChatGPT.

While these tools offer genuine promise, they also carry real risks. Here's what HR teams need to know to stay compliant, fair, and effective when deploying AI detection in recruitment.

AI detection tools: useful, but not the final word

Tools such as GPTZero, Copyleaks, and Turnitin's AI writing detection can estimate how likely it is that an application was written with AI, but their outputs are not definitive. In practice, they often flag:

  • Well-written but formulaic CVs

  • Submissions by non-native English speakers

  • Neurodivergent candidates with unique communication styles

Relying solely on AI detection can therefore lead to unjust candidate exclusions - and potential legal liability under the Equality Act 2010 or UK GDPR.

Key takeaway: Use AI detection tools only as one input among many, always followed by human verification.
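
By way of illustration, the short Python sketch below shows one way to build that principle into a screening workflow. It assumes a hypothetical detection score between 0 and 1; the threshold value, the field names, and the route_application function are illustrative only and are not taken from any vendor's product.

    from dataclasses import dataclass

    # Illustrative threshold only - any real cut-off should be chosen and
    # kept under review by your own team, not copied from this sketch.
    DETECTION_THRESHOLD = 0.8

    @dataclass
    class Application:
        candidate_id: str
        detection_score: float  # hypothetical 0-1 score from a detection tool

    def route_application(app: Application) -> str:
        """Decide the next step for an application.

        The detection score is only ever used to route an application to a
        human reviewer - it never produces a rejection on its own.
        """
        if app.detection_score >= DETECTION_THRESHOLD:
            # Flagged: queue for manual review by an HR professional.
            return "human_review"
        # Not flagged: continue through the normal screening process,
        # which still involves human judgement at later stages.
        return "standard_screening"

    # A flagged application is routed to a person, never auto-rejected.
    print(route_application(Application("cand-001", 0.93)))  # -> human_review

The key design choice is that neither branch rejects anyone: the tool's output only changes who looks at an application next, never whether the candidate progresses.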

Legal essentials: data protection and GDPR compliance

Using AI detection tools in recruitment involves processing candidates' personal data, so the UK GDPR applies - and failing to comply can result in regulatory scrutiny.

Here’s what’s required by law:

  • Transparency: Your privacy notice must state clearly whether and how AI detection is used.

  • DPIA (Data Protection Impact Assessment): Required before implementing any AI tool that is likely to result in a high risk to individuals' rights and freedoms, as AI-driven candidate screening typically is.

  • Contracts: Make sure vendor agreements clearly assign controller/processor roles and outline compliance responsibilities.

Practical tips:

  • Update your privacy notice to explain AI usage, including the logic involved and the consequences for candidates

  • Conduct and document a DPIA before rolling out AI tools

  • Ensure contracts with AI vendors include GDPR safeguards

Contestability: don’t let AI make the final call

Under Article 22 of the UK GDPR, individuals have the right not to be subject to decisions based solely on automated processing that significantly affect them - and where such decisions are made, candidates must be able to obtain meaningful human intervention.

That means:

  • Never reject candidates solely based on AI detection scores

  • Offer a clear appeal route

  • Ensure an HR professional reviews any flagged application before a final decision

This is not just a compliance issue - it’s a fairness and brand reputation issue.

Avoiding discrimination: Equality Act risks

AI detection tools can unintentionally discriminate against candidates with protected characteristics, such as:

  • Neurodivergent applicants

  • Individuals whose first language is not English

  • Those using assistive writing tools or templates

This can amount to indirect discrimination or failure to make reasonable adjustments, both of which are unlawful under the Equality Act 2010.

Best practice:

  • Regularly audit detection tools for bias (a simple flag-rate comparison is sketched after this list)

  • Offer reasonable adjustments and alternative assessment routes

  • Train HR staff on recognising and correcting bias in automated outputs
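
As a rough illustration of what such an audit might involve, the sketch below compares flag rates across candidate groups using equality monitoring data. The records, group labels, and field names are invented for the example; any real monitoring data must be collected lawfully, proportionately, and with candidates' knowledge.

    from collections import defaultdict

    # Invented monitoring data: whether the tool flagged each application and a
    # self-reported group used only for equality monitoring.
    records = [
        {"group": "first_language_english", "flagged": False},
        {"group": "first_language_english", "flagged": True},
        {"group": "first_language_other", "flagged": True},
        {"group": "first_language_other", "flagged": True},
        # ...in practice, many more records
    ]

    def flag_rates(data):
        """Return the proportion of applications flagged, per monitoring group."""
        counts = defaultdict(lambda: {"flagged": 0, "total": 0})
        for r in data:
            counts[r["group"]]["total"] += 1
            counts[r["group"]]["flagged"] += int(r["flagged"])
        return {g: c["flagged"] / c["total"] for g, c in counts.items()}

    rates = flag_rates(records)
    lowest = min(rates.values())
    for group, rate in rates.items():
        # A markedly higher flag rate for one group is a prompt to investigate
        # the tool and its threshold - not, on its own, proof of discrimination.
        ratio = rate / lowest if lowest else float("inf")
        print(f"{group}: flag rate {rate:.0%} ({ratio:.1f}x the lowest group)")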

Transparency and ongoing monitoring

Compliance isn’t a one-time task - it’s an ongoing obligation. HR must commit to:

  • Clearly informing candidates when AI tools are used

  • Keeping records of how decisions were made (a minimal record format is sketched after this list)

  • Regularly reviewing AI tools for accuracy, bias, and legal compliance
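
One practical way to keep such records is a short, consistent entry for every flagged application, capturing what the tool reported and what the human reviewer decided and why. The sketch below is illustrative only; the field names are assumptions rather than a prescribed format.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class ScreeningRecord:
        """Illustrative record of how a flagged application was handled."""
        candidate_id: str
        tool_name: str          # the detection product that raised the flag
        detection_score: float  # the raw output that triggered the flag
        reviewed_by: str        # the HR professional who made the decision
        decision: str           # e.g. "progressed" or "rejected"
        reasons: str            # the reviewer's own reasoning, in plain language
        reviewed_at: datetime = field(
            default_factory=lambda: datetime.now(timezone.utc)
        )

    # The record shows that a human made and explained the decision, not the tool.
    print(ScreeningRecord(
        candidate_id="cand-001",
        tool_name="example-detector",
        detection_score=0.93,
        reviewed_by="hr.reviewer@example.com",
        decision="progressed",
        reasons="Writing style consistent with work samples; flag not corroborated.",
    ))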

Candidates are increasingly savvy - and regulators are watching. Proactive transparency builds trust and reduces risk.

Checklist for HR professionals

Before using AI detection tools in recruitment, make sure you:

  • Use detection tools only as a supporting aid, not a decision-maker

  • Update privacy notices to reflect AI and detection tool use

  • Conduct a DPIA and update it regularly

  • Ensure candidates can challenge decisions and access human review

  • Audit AI tools for discrimination risks and bias

  • Review contracts with AI vendors for GDPR compliance

  • Train staff on the legal and ethical use of AI in hiring

Conclusion: thoughtful use, not blind trust

AI detection can help filter high volumes of applications - but it’s not a substitute for human judgment. HR professionals must lead the way in responsible, lawful, and inclusive use of AI in hiring.

Used wisely, AI can support better decisions. Used carelessly, it can lead to discrimination claims, reputational harm, and serious compliance failures.

The future of recruitment isn’t just AI-powered - it’s HR-guided.

This article was created with insights from Lex HR - your always-on HR legal assistant. Lex HR helps HR professionals navigate complex employment law with confidence, providing real-time, reliable advice tailored to your needs. Try it free today and see how much easier compliance can be.