Bias in, bias out: how artificial intelligence (AI) recruitment can cost you

Scenario

Let us imagine a scenario.

Victoria, a senior recruiter at a healthcare charity, is proud of her organisation’s progressive values. Pressed for time, she signs off on a new AI-powered tool that screens CVs and ranks candidates by ‘fit’. The provider says it uses anonymised data and ‘learns’ from successful hires. So far, so good.

Then, Victoria’s team notices the shortlists all look suspiciously the same. No one flags it. Six months later, a candidate requests their data under the UK General Data Protection Regulation (UK GDPR), and all hell breaks loose.

What went wrong? Plenty

Victoria assumed the tool was neutral. It wasn’t. It had been trained on biased historical data. It inferred ethnicity and gender from names to ‘improve’ its fairness algorithm, without informing candidates. Worse, the AI provider claimed to be just a processor, washing its hands of responsibility. Victoria’s team had not done a proper Data Protection Impact Assessment (DPIA), checked for bias mitigation, or clarified who was responsible for transparency. The Information Commissioner’s Office (ICO) would have a field day.

What is the legal (and practical) problem?

AI tools in recruitment are often sold as efficient, consistent and scalable. But the ICO’s November 2024 audit report reads like a warning label. It found tools that:

  • inferred protected characteristics like gender and ethnicity from names or application data
  • let recruiters filter out candidates based on those inferred traits
  • collected far more personal data than needed, including scraped content from social media
  • had little to no bias testing, or relied on incomplete or made-up datasets
  • operated without lawful basis, clear privacy information, or valid contracts.

In other words: many AI recruitment tools are privacy minefields, and discrimination claims waiting to happen.

Recruiters and the HR professionals advising them cannot outsource legal accountability. If you are using AI, the data protection risks are yours to own. That includes liability for unfair or discriminatory outcomes, breaches of UK GDPR, and reputational fallout.

The law: no exemption for cool technology

The UK GDPR applies fully to personal data processed by AI, including when it is used to screen, assess or select candidates. That means:

Fairness

You must monitor for bias in AI outputs. ‘Better than random’ is not good enough if the AI’s decisions affect people’s rights.
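
What does monitoring look like in practice? As a purely illustrative sketch (not an ICO-mandated method), the snippet below compares shortlisting rates across self-reported demographic groups and flags any group whose rate falls below four fifths of the highest rate. The group labels, data and threshold are assumptions for the example; real monitoring should be designed with your data protection and legal advisers.

    # Purely illustrative sketch of bias monitoring, not an ICO-mandated method.
    # Assumes anonymised monitoring records pairing a self-reported demographic
    # group (from an optional survey) with whether the AI tool shortlisted the
    # application. Group labels, data and the 0.8 threshold are illustrative.
    from collections import defaultdict

    def selection_rates(records):
        """Shortlisting rate per demographic group."""
        totals, shortlisted = defaultdict(int), defaultdict(int)
        for group, was_shortlisted in records:
            totals[group] += 1
            shortlisted[group] += int(was_shortlisted)
        return {g: shortlisted[g] / totals[g] for g in totals}

    def adverse_impact_ratios(rates):
        """Each group's rate divided by the highest rate (the 'four-fifths' rule of thumb)."""
        best = max(rates.values())
        return {g: r / best for g, r in rates.items()}

    # Illustrative data only: (self-reported group, shortlisted by the tool?)
    records = [("group_a", True), ("group_a", True), ("group_a", False),
               ("group_b", True), ("group_b", False), ("group_b", False)]

    rates = selection_rates(records)
    for group, ratio in adverse_impact_ratios(rates).items():
        flag = "REVIEW" if ratio < 0.8 else "ok"
        print(f"{group}: rate {rates[group]:.2f}, ratio {ratio:.2f} ({flag})")

The point is not the particular threshold; it is that you have a repeatable check, run it regularly, and act on what it finds.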

Transparency

Candidates must know how their data are used, what is inferred about them, and how the tool works (in plain English).

Data minimisation

Only use the personal data needed and no more. ‘Scraped because we can’ does not cut it.

Controller/processor clarity

Many vendors claim to be processors, but act like controllers. If they use your candidates’ data to train their model, they’re probably a controller (and non-compliant if they do not say so).

Lawful basis

Inferring special category data (such as race or health) without explicit consent and a valid condition? That is likely to be unlawful.

Human oversight

Solely automated decisions that significantly affect candidates are restricted under the UK GDPR, so ensure your AI tool incorporates meaningful human input into recruitment decisions.

What should you do?

Audit your AI tools now

Do not wait for a complaint or an ICO letter. Ask vendors:

  • What data does the tool collect or infer?
  • How do they test for bias and accuracy, and how often?
  • Do they use your candidate data to train models shared with others?
  • Are they a processor or controller (in practice, not just on paper)?

Review your DPIAs

Using AI in recruitment is almost always high-risk. Your DPIA should:

  • cover fairness, bias, and impact on candidates
  • include a clear map of data flows and party responsibilities
  • be reviewed when tools are updated or when your use changes.

Fix your contracts

Contracts should spell out:

  • who provides privacy information to candidates
  • what data are collected, for what purpose, and by whom
  • that the AI provider will not repurpose candidate data without your say-so.

Train your hiring teams

Recruiters must understand how to interpret AI outputs and when to override them. If your managers treat ‘fit scores’ as gospel, you have a problem.

Do not infer, ask

If you want to monitor for bias, collect demographic data via an optional candidate survey. Do not guess race or gender from names. That is not clever; it is discriminatory.

Be ready to explain

Publish privacy information that is specific, accurate, and understandable. If your tool affects someone’s job chances, they deserve more than a vague clause buried in a privacy policy.

Final thought

AI can streamline recruitment, but only if you stay in the driving seat. If you do not know how your tool works, what it is learning, or what it is doing with your data, then you are not using AI. You are being used by it.

And that is not innovation. It is liability.

Source: AI tools in recruitment | ICO