People are not robots. They have good days and bad days, families, complications and responsibilities. They arrive at work with feelings of overconfidence and insecurity, exhilaration and grief, hope and anger.
As HR and reward professionals, we need to ensure all the AI-driven innovation we use can live alongside that humanity and strengthen it, rather than undermine or replace it. To get the best out of AI, we need to use it in the right way and approach each use case with curiosity, caution and care.
The upside: how AI can complement human empathy
Reducing bias in the hiring process
Properly designed AI can flag bias or help standardise applications, allowing empathetic recruiters more time to make fairer decisions with better data. By acting as a filter for unconscious bias, AI can help HR make inroads into equality in key areas:
Gender: Even empathetic recruiters can stereotype or make assumptions around leadership styles, maternity leave or finding the right ‘fit’ in industries considered ‘male-dominated’. AI tools can anonymise CVs and focus on skills and experience, creating a fairer shortlist and giving women and non-binary candidates equal opportunity.
Age: In the same way, some recruiters might unconsciously assume older candidates are less adaptable, more expensive and lack digital savvy. On the flip side, younger candidates may be seen as inexperienced. By highlighting skill relevance and proven achievements rather than age, AI can help unearth capability.
Disability: Some disabilities are more visible than others, triggering assumptions around performance, absenteeism or ‘complications’. Again, by screening for skills and paying less attention to gaps in education or employment, AI highlights capability without revealing disability status, giving candidates a better chance.
Ethnicity or background: A name or accent can trigger unconscious bias. Again, AI’s ability to be impartial can ensure candidates from underrepresented groups get an equal chance.
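To make the anonymisation idea concrete, here is a minimal sketch of what a CV-redaction pass might look like. It is purely illustrative: real anonymisation tools use trained entity recognition rather than the simple labelled-field patterns assumed here, and the field names are hypothetical.

```python
import re

def anonymise_application(text: str) -> str:
    """Illustrative redaction pass: strip fields that commonly trigger
    unconscious bias (name, age, nationality) before a CV reaches
    reviewers, leaving skills and experience intact."""
    rules = [
        (r"(?im)^name:.*$", "Name: [REDACTED]"),
        (r"(?im)^date of birth:.*$", "Date of birth: [REDACTED]"),
        (r"(?im)^nationality:.*$", "Nationality: [REDACTED]"),
    ]
    for pattern, replacement in rules:
        text = re.sub(pattern, replacement, text)
    return text

cv = "Name: Jane Doe\nDate of birth: 01/02/1985\nSkills: payroll, SQL"
print(anonymise_application(cv))
```

The point of the sketch is the design choice, not the regexes: identifying details are removed before human judgement is applied, so the recruiter's empathy works from skills and experience alone.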
Assisting in pay and role evaluation
By standardising how roles are evaluated, AI applies the same rules to all positions. It can analyse the skills, qualifications, responsibilities and market data associated with a role regardless of the individual holding it.
This reduces the risk of subjective managerial discretion, or that women or other underrepresented groups might be paid less simply because of negotiation styles, stereotypes or historical bias.
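A simple way to picture "the same rules for all positions" is a point-factor score. The weights below are invented for illustration, not a published methodology; the key property is that two identical role profiles always receive identical scores, whoever holds the role.

```python
def evaluate_role(skills: int, qualifications: int,
                  responsibility: int, market_benchmark: int) -> float:
    """Hypothetical point-factor evaluation: each factor is rated 1-10
    and combined with fixed weights, independent of the role holder."""
    return (0.3 * skills
            + 0.2 * qualifications
            + 0.3 * responsibility
            + 0.2 * market_benchmark)

# Two holders of the same role profile get the same evaluation.
score = evaluate_role(skills=8, qualifications=7,
                      responsibility=9, market_benchmark=8)
```

Because negotiation style, stereotype or history never enter the function, the subjective discretion described above has no channel through which to act.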
Personalising employee support and freeing time for human connection
By automating many of the more tiresome admin tasks like payroll, annual leave requests and rota-creation, AI reduces the time HR wastes. It allows us to focus more on people and those high-value conversations.
By analysing engagement surveys, health data or career development needs at scale, the tech can also allow HR to act empathetically and tailor interventions or support where employees need them most.
Help with employee wellbeing
If used well, AI tools’ predictive insights can enable proactive care and prevention by detecting early signs of burnout, attrition risk or disengagement. Armed with this information, managers can step in and apply empathy before problems worsen.
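An early-warning rule of this kind can be very simple. The sketch below is an assumption-laden illustration (the signals and thresholds are invented): it flags an employee for a human check-in when sustained overtime coincides with falling engagement scores. The flag prompts a manager conversation; it is not a diagnosis.

```python
def flag_burnout_risk(weekly_hours: list, engagement: list,
                      hours_limit: float = 48,
                      engagement_floor: int = 5) -> bool:
    """Illustrative burnout early-warning rule (thresholds assumed):
    True means 'a manager should check in', nothing more."""
    sustained_overtime = sum(weekly_hours) / len(weekly_hours) > hours_limit
    declining = (engagement[-1] < engagement[0]
                 and engagement[-1] < engagement_floor)
    return sustained_overtime and declining

# Three weeks of heavy hours plus sliding engagement raises the flag.
at_risk = flag_burnout_risk([50, 52, 55], [7, 5, 4])
```

The design choice matters more than the rule: the output is a prompt for empathy, routed to a manager, rather than an automated intervention.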
The downside of using AI
There is one important caveat to all of the above: AI's promise of objectivity holds true only if the training data and algorithms are carefully selected and audited.
If historical hiring data is biased, an AI algorithm may only amplify bias rather than eliminate it. This is where the human touch – empathy and ethics – is still essential, even indispensable.
AI may filter bias but humans are still needed to interpret and validate. So what are the key dangers of AI in those same areas?
Risks in recruitment and onboarding
If poorly trained, AI-driven applicant tracking could default to simple settings, filtering candidates out based on keywords, gaps in CVs or non-traditional career paths.
This kind of robotic assessment can overlook human potential, ignore resilience and deny the kind of ‘magic’ cultural fit that a recruiter’s empathy might detect in conversation.
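The failure mode is easy to demonstrate. This deliberately naive sketch (the screening logic is an assumption, not any vendor's product) rejects any CV missing exact keyword matches, however relevant the experience behind different wording.

```python
def naive_keyword_screen(cv_text: str, required: set) -> bool:
    """Crude illustration of keyword filtering: pass only if every
    required keyword appears verbatim, ignoring equivalent phrasing."""
    words = {w.strip(".,").lower() for w in cv_text.split()}
    return required <= words

# A career-changer who writes 'people analytics' instead of
# 'HR analytics' is filtered out despite relevant experience.
rejected = not naive_keyword_screen("Led people analytics projects",
                                    {"hr", "analytics"})
```

This is exactly the potential a recruiter's conversation would have caught and a keyword match never will.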
Missing the ‘human side’ of role benchmarking
A role’s value is not only down to the tasks it performs but also how it fits into the overall culture and team dynamic, which AI might miss or undervalue.
Invisible or soft skills like empathy, relationship management and coaching – often disproportionately held by women – will inevitably be harder to quantify and value in roles, further skewing any imbalance.
Performance management
There is a danger that AI-led performance monitoring reduces employees to numbers and metrics. By ignoring the context of personal challenges, home lives, creativity or efforts above and beyond, an algorithm will fall short of a manager’s more personal judgement of how someone is doing.
Employee wellbeing
Chatbots or automated mental health tools can offer generic support but will still miss nuance. If these are relied on too much, employees might feel their concerns are not being properly heard or validated.
When extended to conflict resolution, these tensions only grow. No amount of AI sentiment analysis will let a machine mediate with sensitivity, reading body language, intent and facial expression the way a person can.
The future of pay and reward in an AI world
Organisations need to future-proof their job architecture, skills modelling and competency frameworks, while considering the potential, pace and scale of AI integration. Consider the following three key areas:
- With AI taking on routine tasks, explore the transition from traditional role-based pay to models rewarding the value of specific tasks and the skills applied to them.
- Explore the shift away from rewarding service, job title and role to compensating proficiency in new and in-demand skills and talents.
- Look at how you can move away from hierarchies towards more AI-enabled hybrid teams, and consider how to design and implement group incentives that foster collaboration and shared success.
AI cannot currently replace human judgement. But when trained well enough to genuinely augment human understanding, it can enhance our working practices by creating more time for our own application of empathy.