Artificial intelligence (AI) is now embedded in hiring and people management. But when algorithms go wrong, the legal and reputational fallout lands squarely on leaders.
Scenario
You deploy an AI CV screener to cut costs in early-stage hiring. The tool consistently downranks CVs with employment gaps longer than six months, disproportionately screening out applicants who took time out for caring responsibilities. The pattern stays invisible to managers until a rejected candidate asks why. By then, the system has baked in a statistical disadvantage: an adverse impact on women and carers.
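To make the mechanism concrete: a rule that never mentions a protected characteristic can still disadvantage a group if the feature it keys on (here, gap length) is unevenly distributed between groups. A minimal sketch, using invented figures purely for illustration:

```python
# Illustrative only: the groups and gap figures are invented,
# not real labour-market data.

def screener_passes(gap_months: int) -> bool:
    """The facially neutral rule: reject long employment gaps."""
    return gap_months <= 6

# Hypothetical applicant pools with typical gap lengths (months)
pools = {
    "no caring responsibilities": [3, 2, 0, 4, 1, 5, 2, 0],
    "carers (often women)":       [14, 9, 2, 18, 4, 11, 7, 3],
}

for group, gaps in pools.items():
    passed = sum(screener_passes(g) for g in gaps)
    print(f"{group}: {passed}/{len(gaps)} pass the screen")

# The rule mentions no protected characteristic, yet the pass
# rates diverge sharply: the statistical disadvantage above.
```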
What is the law?
Under the Equality Act 2010, employers must avoid both direct discrimination (treating someone less favourably because of a protected characteristic) and indirect discrimination (rules or practices that put particular groups at a disadvantage without justification). An AI tool that disproportionately filters out carers, women or disabled applicants risks both.
The UK General Data Protection Regulation (UK GDPR) also applies: automated screening processes candidates' personal data, and that brings specific obligations.
Key points
- Transparency: candidates have the right to know whether AI influenced decisions about them.
- Automated decision-making rights: Article 22 of the UK GDPR restricts decisions made solely by automated means where they have ‘legal or similarly significant effects’.
- Data Protection Impact Assessment (DPIA): a formal risk assessment is required before high-risk processing, such as large-scale automated screening.
Guidance from the Information Commissioner’s Office (ICO) stresses that employers remain accountable, even if a vendor supplies the system. Similarly, the Equality and Human Rights Commission (EHRC) has warned that algorithmic bias is squarely an employer’s problem.
Sources:
- What are the accountability and governance implications of AI? | ICO
- Assessing the equality impact of AI-based technology: six discussion points | EHRC
What should you do?
- Audit procurement rigorously: demand that vendors show how they tested for bias and explain the tool’s logic in plain English.
- Test for bias in outcomes, not just inputs: check for disproportionate rejection rates by gender, age, disability and other protected characteristics (a simple check is sketched after this list).
- Carry out a Data Protection Impact Assessment before deployment: document risks, mitigations and consultation with staff representatives where relevant.
- Keep a human in the loop: ensure a manager reviews and can override AI-driven shortlists.
- Update privacy notices: spell out when and how automated tools are used.
- Train human resources (HR) teams: equip them to spot and escalate concerns about AI-driven decisions.
- Review regularly: algorithms evolve; so must your compliance checks.
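To make the outcome-testing bullet actionable: one widely used heuristic compares each group’s shortlisting rate with the best-performing group’s. A minimal sketch in Python, with hypothetical group labels and invented outcome data; the 0.8 cut-off is the US ‘four-fifths’ rule of thumb, not a UK legal test, so a low ratio is a trigger for investigation rather than proof of indirect discrimination.

```python
from collections import Counter

def selection_rates(outcomes):
    """Shortlisting rate per group from (group, shortlisted) pairs."""
    totals, selected = Counter(), Counter()
    for group, shortlisted in outcomes:
        totals[group] += 1
        if shortlisted:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratios(rates):
    """Each group's rate relative to the highest-rate group."""
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical screening outcomes: (group, was_shortlisted)
outcomes = ([("A", True)] * 48 + [("A", False)] * 52
            + [("B", True)] * 30 + [("B", False)] * 70)

rates = selection_rates(outcomes)
for group, ratio in adverse_impact_ratios(rates).items():
    # Below 0.8 is a common flag for closer review, not a verdict.
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rates[group]:.0%}, ratio={ratio:.2f} [{flag}]")
```

A real audit would go further: test whether gaps are statistically significant, examine intersectional breakdowns (for example, older women), and feed any flags back into the DPIA and human-review steps above.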
How confident are you that your organisation could explain, in public, how its HR algorithms actually work?