While AI offers many possibilities for the HR sector, it is problematic when it comes to bias-free recruitment.
Artificial intelligence (AI) is seemingly ubiquitous these days, with potential uses rearing their heads in pretty much every business sector, and talent management is no exception.
The latest development in this regard has been an artificial intelligence robot called Tengai, which featured prominently across mainstream media as the latest ‘miracle solution’ to the problem of bias in hiring.
Arguments in favour of AI as the catch-all answer to the problem of bias have been popular for a while now, underpinned by the common belief that, as more companies automate the recruitment process, we’ll reach a point where human recruiters are no longer needed. The human element, along with the bias that comes with it, will then be eliminated from recruitment entirely, or so the argument goes. Is this really the case, though?
There is a lot of contention around the issue, with the controversial shutdown of Google’s AI ethics board after little more than a week showing just how much of a hot-button issue it is.
The question we’ll ask in this article, therefore, is – can robots truly offer a bias-free solution?
Robot recruiting: the problems
First of all, it’s important that we analyse Tengai, and the claims of Furhat Robotics, its creator.
Tengai, revealed to the media in Stockholm last month, is designed to accurately mimic human speech and facial expressions, looking and acting as much like a human recruiter as possible.
For the last few months, Furhat has been collaborating with one of Sweden’s largest recruitment firms, TNG, with the goal of eventually offering job interviews free from any unconscious bias.
The company claims that its objective approach, in which every candidate is asked the same questions, will eliminate bias. Perhaps as expected, those involved in the process have given it glowing reviews, claiming it will herald a new age of recruitment with zero bias.
While Tengai may seem revolutionary on the surface, there are a number of problems with not only this robot, but all AI recruiting.
Firstly, with the vast diversity of race, gender and ability we have in workforces today, is asking every candidate the same set of questions really the best way to remove bias?
Certain groups will require different approaches, and Tengai’s ‘one size fits all’ approach may in fact be hindering diversity efforts.
Unfortunately, the system must carry some form of bias in order to make decisions at all, and individuals who fall outside what the system determines to be the ‘right’ way to behave will stand no chance.
Furthermore, with programming teams notoriously homogeneous, male-dominated and lacking in racial diversity, historic biases may already be ingrained in certain AI recruitment functions.
This will be especially true if the AIs are trained on historic company data mined from generations of homogeneous workforces.
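To make that concrete, here is a minimal, purely illustrative sketch in Python using scikit-learn. The numbers are invented for the example and stand in for historic hire/no-hire decisions; the point is simply that a model trained on skewed data learns to repeat the skew:

```python
# Illustrative sketch only: a toy model trained on hypothetical historic hiring
# data in which one group was almost always hired and an equally qualified
# group almost never was. The model simply learns to reproduce that pattern.
from sklearn.linear_model import LogisticRegression

# Each row: [years_experience, group], where 'group' encodes a demographic
# attribute (0 = historically favoured group, 1 = under-represented group).
X_train = [
    [5, 0], [7, 0], [3, 0], [6, 0],   # favoured group, all hired
    [5, 1], [7, 1], [3, 1], [6, 1],   # same experience, mostly rejected
]
y_train = [1, 1, 1, 1, 0, 0, 0, 1]    # historic hire / no-hire outcomes

model = LogisticRegression().fit(X_train, y_train)

# Two candidates with identical experience, differing only by group:
print(model.predict_proba([[6, 0]])[0][1])  # probability of 'hire' is higher
print(model.predict_proba([[6, 1]])[0][1])  # probability of 'hire' is lower
```

Nothing here is ‘prejudiced’ in a human sense; the model has simply encoded the pattern it was handed.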
In fact, a recent study of facial recognition software from Microsoft, IBM and Face++ reflected this kind of ingrained bias. The systems were shown 1,000 faces and told to identify each as male or female.
All three performed well on white faces, and on men in particular. When it came to dark-skinned women, however, there were 34% more errors. There are plenty of other such examples.
For instance, a programme used by US courts for risk assessment and prison sentencing was found to be heavily biased against prisoners of colour, labelling these defendants as far more likely to reoffend and wrongly flagging them at almost twice the rate of white defendants (45% versus 24%).
Innovation is always welcome
Despite this, it would be remiss to single out only the negatives associated with AI in hiring when there are so many exciting developments in the sector. Simply put, AI has many advantages over humans.
Machine learning technology can be empirically trained with data, allowing companies to use tried and tested statistical relationships to make hiring decisions.
The human mind is far weaker when it comes to the kind of pattern recognition used by these technologies to identify traits that make good hires.
Most people simply have a preconceived idea of the traits they look for or avoid in a candidate, but no idea what the actual success or failure rate of people with those traits is.
AI and machine learning, however, provide hard data that either confirms or rejects these beliefs, offering a far better picture of how successful a potential candidate could be.
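As a rough illustration of what that hard data can look like, the short Python sketch below uses entirely hypothetical records of past hires to calculate the actual success rate of people who shared a given trait, rather than relying on a recruiter’s gut feel:

```python
# Illustrative sketch: hypothetical records of past hires, each with the traits
# a recruiter screened for and whether the hire ultimately worked out.
past_hires = [
    {"traits": {"ivy_league", "extrovert"}, "successful": True},
    {"traits": {"ivy_league"}, "successful": False},
    {"traits": {"career_gap", "extrovert"}, "successful": True},
    {"traits": {"career_gap"}, "successful": True},
    {"traits": {"extrovert"}, "successful": False},
]

def success_rate(trait):
    """Share of past hires with this trait who went on to succeed."""
    with_trait = [h for h in past_hires if trait in h["traits"]]
    if not with_trait:
        return None  # no evidence either way
    return sum(h["successful"] for h in with_trait) / len(with_trait)

print(success_rate("career_gap"))   # 1.0 in this toy data set
print(success_rate("ivy_league"))   # 0.5
```

A belief such as ‘a career gap is a red flag’ can then be kept or discarded based on the evidence rather than instinct.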
There are many concrete examples of areas where technology can help remove bias. Take the writing of job adverts: studies have found that certain words or gendered terms in job descriptions can dissuade female candidates from applying for roles.
Certain companies, such as Textio, mine data from millions of job listings to gauge the gender tone of job postings, allowing decision makers to tweak their listings and attract a more diverse range of candidates.
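For illustration only, a simple version of that idea might look like the Python sketch below. The word lists are tiny examples of gender-coded language invented for this article, not the vocabulary of any particular vendor’s tool:

```python
# Illustrative sketch: score the 'gender tone' of a job advert by counting
# words from two small example lists of gender-coded language.
import re

MASCULINE_CODED = {"competitive", "dominant", "ninja", "aggressive", "fearless"}
FEMININE_CODED = {"collaborative", "supportive", "nurture", "interpersonal", "community"}

def gender_tone(advert: str) -> str:
    """Return a rough label for the gender tone of an advert's wording."""
    words = re.findall(r"[a-z]+", advert.lower())
    masculine = sum(w in MASCULINE_CODED for w in words)
    feminine = sum(w in FEMININE_CODED for w in words)
    if masculine > feminine:
        return f"masculine-coded ({masculine} vs {feminine})"
    if feminine > masculine:
        return f"feminine-coded ({feminine} vs {masculine})"
    return "neutral"

print(gender_tone("We need a competitive, fearless ninja to win new markets."))
print(gender_tone("Join a supportive, collaborative team serving the community."))
```

Commercial tools work from far larger vocabularies and more sophisticated models, but the underlying principle of measuring the tone of the language and nudging it towards neutral is the same.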
Caution is needed
Ultimately, it’s clear that developing technology to better understand and remove unconscious bias is something worth exploring.
Despite this, a healthy dose of caution should come with any immediate rush of optimism toward robots such as Tengai.
Crucially, the real obstacle to bias-free recruitment is the traditional hiring process, and without root and branch change to this, true diversity cannot be achieved.
This is why developments like Tengai, which are really just superficial modifications to existing practices, will not work.
Interested in this topic? Read Recruitment: why AI and Big Data alone can’t solve bias.