Carolina Merlin

Mauve Group

Compliance Manager


The legal risks of AI-driven recruitment and redundancy (and how to govern them properly)

Over half of business leaders regret layoffs made using AI-driven tools. But regret is not the biggest concern here – legal risk is. Carolina Merlin, Compliance Manager of Mauve Group, highlights the dangers of giving too much control to AI in HR decision-making, and why proper governance is essential at every stage of the employee lifecycle.

Summary: AI tools used in recruitment and redundancy create exposure under UK employment law, data protection rules and the Equality Act 2010. Employers cannot outsource accountability to algorithms – legal responsibility always rests with the organisation. AI can support HR decisions when applied with human oversight, regular audits and documented independent judgement at every stage.


Recently, tech giant Amazon confirmed 16,000 job cuts, hours after staff were informed of a new round of redundancies via an email sent in error.   

The email, accidentally sent to a number of staff and then quickly recalled, referred in its subject line to Project Dawn, the internal codename for the layoffs.

In 2018, the same company scrapped an AI recruitment tool after it was found to discriminate against women, having been trained on historically male-dominated CV data. The tool was designed to improve efficiency, but instead created legal and reputational risk.

These mistakes serve as a reminder that processes around hiring and redundancy must be handled sensitively. And as AI becomes increasingly integrated into HR workflows, the profession must take extra precautions.

In today’s world, AI can influence HR decisions like recruitment, performance scoring, and even redundancy modelling.

But data is already showing that decisions driven by AI tools can backfire. Research from Orgvue found that 55% of business leaders regret layoffs made using AI-driven workforce planning tools, highlighting that the issue here is governance, not technology. While AI promises efficiency, without proper oversight it can undermine control at precisely the moments HR needs it most. 

The risk: When AI undermines control 

AI tools can now be used at almost every stage of the employment cycle. The CIPD reports 79% of organisations use technology to support recruitment, including AI-enabled tools. 

Yet AI is being adopted and implemented faster than it’s being regulated. And when HR delegates decisions to systems whose outputs it has little control over, risk naturally follows.

AI has learned all it knows from humans, including their biases, so one major risk is that prejudice embeds itself into hiring algorithms. Amazon’s scrapped recruitment tool shows how historic workforce imbalances can shape automated outcomes. The Equality Act 2010 does not forgive discrimination on the basis that it was committed by software.

Another risk is that trust can be broken through the ubiquitous use of AI tools. According to research from the HOW Institute for Society, 95% of employees see moral leadership as essential, but only 10% of leaders commit to consistently embodying these principles.  

The TUC found that 60% of workers believe AI will increase workplace surveillance. If employees feel they are scored or ranked by invisible systems, morale suffers, in turn reducing engagement and retention.

For multinational employers, cross-border risks complicate matters. Rules on hiring, data protection, and redundancy vary, so an AI model compliant in one country may create exposure in another. Specialist expertise is required to assess both the tool and its local compliance. 

Employers cannot outsource accountability to an algorithm. Under UK employment law, legal responsibility rests firmly with the organisation, regardless of whether a decision is informed by human judgement or AI-enabled systems. 

The UK GDPR and Data Protection Act 2018 restrict solely automated decisions that have legal or similarly significant effects, including hiring, promotion, and redundancy. Organisations are required to provide meaningful information about the logic behind those decisions; ‘the system said so’ is not a defence before regulators or tribunals.

Discrimination risk remains acute. If an AI tool disproportionately filters out candidates with a protected characteristic under the Equality Act 2010, the employer carries liability. The same applies to redundancy scoring matrices generated or influenced by AI. 

Unfair dismissal claims present another exposure. Employers must demonstrate a fair reason and a reasonable process. If managers cannot explain how an AI-generated or assisted redundancy ranking emerged, they weaken their position before a tribunal. Under the UK’s Employment Rights Act, due to take effect from 2027, protection from unfair dismissal will become a right after six months of employment, replacing the current two-year qualifying period. The cap on the compensatory award for unfair dismissal will also be removed. These rules protect workers while making the consequences of non-compliant dismissal more severe for employers.

In large-scale redundancies, employers must consult appropriate representatives and provide prescribed information, adding further complexity. An opaque model that pre-determines outcomes undermines meaningful consultation. 

Where AI adds real value 

Despite these risks, AI can support HR when used appropriately and applied with human oversight. HR departments can use predictive analytics to flag early signs of disengagement or burnout. When used responsibly, these signals allow managers to intervene with support rather than discipline. This approach contributes to better workplace wellbeing and plays a meaningful role in long-term retention strategies. 

AI can also model different economic situations and demonstrate how changes in demand might affect skills gaps or costs. This supports an evidence-based strategy but must be used in conjunction with human judgement. 

Organisations that treat workforce governance as a continuous oversight discipline, rather than a one-off system implementation, are better positioned to adapt as regulation evolves. 

Where HR leaders must step in 

In the age of AI, leaders need to retain control of key decision-making in areas including hiring, redundancy strategies, pay and employment status. 

While AI can be leaned on in the early stages to take on some of the heavy lifting, its output must be reviewed by a qualified decision-maker who tests its reasoning and documents their independent judgement.
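By way of illustration, the sketch below shows one hypothetical way such a review could be recorded alongside each AI-assisted decision. The structure and field names are assumptions made for the sake of example, not a legal or prescribed template.

```python
# A minimal sketch of a review record for an AI-assisted decision.
# The field names below are illustrative assumptions, not a prescribed standard.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIAssistedDecisionRecord:
    case_ref: str               # pseudonymised reference to the candidate or employee
    ai_recommendation: str      # e.g. "shortlist", "reject", "include in redundancy pool"
    ai_rationale: str           # the explanation provided by the tool, captured verbatim
    reviewer: str               # the qualified decision-maker accountable for the outcome
    final_decision: str         # may agree with or overturn the AI recommendation
    reviewer_rationale: str     # the reviewer's independent judgement, in their own words
    reviewed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
```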

HR should ask practical questions about the AI tools they are using. How was the model trained? Does it embed historic bias? Can outputs be explained? Are audits conducted regularly? 

Regular internal audits of outcomes by gender, ethnicity, and age help HR spot bias. If concerning patterns appear, use of the tool should be paused and investigated.
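To make that concrete, the sketch below shows one simple way such an audit could be run in Python, comparing each group’s selection rate against the best-performing group and flagging ratios below the widely cited four-fifths (0.8) benchmark. The file name, column names, and threshold are hypothetical, and any real audit should be designed with legal and data protection advice.

```python
# A minimal sketch of an adverse-impact check, assuming a hypothetical export
# ("screening_results.csv") with one row per applicant, a column for the
# protected characteristic being audited (e.g. "gender") and a binary
# "shortlisted" column (1 = progressed, 0 = filtered out).

import pandas as pd

def adverse_impact_check(df: pd.DataFrame, group_col: str, outcome_col: str,
                         threshold: float = 0.8) -> pd.DataFrame:
    """Compare each group's selection rate against the highest group's rate."""
    rates = df.groupby(group_col)[outcome_col].mean()   # selection rate per group
    ratios = rates / rates.max()                         # impact ratio vs. best-performing group
    return pd.DataFrame({
        "selection_rate": rates,
        "impact_ratio": ratios,
        "flag_for_review": ratios < threshold,           # True => pause the tool and investigate
    })

if __name__ == "__main__":
    results = pd.read_csv("screening_results.csv")       # hypothetical export from the AI tool
    print(adverse_impact_check(results, group_col="gender", outcome_col="shortlisted"))
```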

Remaining transparent about AI usage strengthens trust and compliance. To reduce suspicion and mitigate legal risk, businesses should post clear privacy notices and make sure their employees have access to explanations and open consultation processes. 

Regaining control in an AI-driven workplace 

Organisations are beginning to see the consequences of innovation outpacing judgement. Orgvue’s findings highlight how easily leaders can regret decisions made with excessive reliance on automation.

AI will continue to shape recruitment, retention and redundancy; the question is not whether HR should use it, but how. 

HR leaders must embed governance at every stage of the employment lifecycle and insist on explainability, documented human review and outcome audits. In an AI-driven workplace, control is preserved not by resisting technology, but by governing it. 

Key HR takeaways

  • Legal responsibility for AI-influenced employment decisions always rests with the employer. Workforce governance must remain a leadership responsibility – not a system output. 
  • AI adoption demands structured, risk-led oversight. Technology should be implemented within a clear governance framework, supported by documented human review, audit mechanisms and compliance controls that stand up to scrutiny. 
  • Cross-border complexity requires jurisdiction-aware governance. AI tools cannot be used without human oversight when assessing local employment law, data protection requirements, and consultation obligations in each market of operation. Global workforce compliance cannot be standardised by AI alone. 
