
Charles Hipps




How big data can eliminate bias and elitism in candidate selection


Studies by the Social Mobility Commission have shown that the largest industries are failing to hire talented youngsters from less advantaged backgrounds because they recruit from a small pool of elite universities and hire those who fit in with the culture – still favouring middle- and higher-income candidates who come from a handful of the country’s top universities.

But can this notion be challenged by reinventing blind recruitment and using predictive analytics to help recruiters get to the candidates that no one else knows about?

Big data is increasingly viewed as a strategic asset that can transform organisations through powerful predictive technologies. Eliminating bias in recruitment is feasible through the use of such ‘blind’ algorithms.

Recent studies from Royal Holloway University of London and the University of Birmingham suggest managers often select candidates for client-facing jobs who fit the ‘traditional’ image of a role, with many placing as much importance on an individual’s speech, accent, dress and behaviour as on their skills and qualifications.

This disadvantages candidates whose upbringing and background mean they are not aware of ‘opaque’ city dress codes – for example, some senior investment bankers still consider it unacceptable for men to wear brown shoes with a business suit.

Top recruiters might receive over 150,000 applications a year – and rising – from a mixture of core and non-core schools, and have no time to sift them fairly.

Big data can ease this pressure. Used well, it will sift applications and flag candidates who have all the key indicators of success you’re looking for but didn’t go to a target school – i.e. schools that are not on anyone’s core list yet still produce exceptional talent.

Algorithmic techniques such as data mining can help to eliminate human biases from the decision-making process.

But, crucially any algorithm is only as good as the data it works with.

To follow this path, it is important to be self-critical of your use of big data to ensure that you do not inherit the prejudices of prior decision-makers or reflect the widespread biases that persist in society at large.

A blog post by the White House staff captures this perfectly.

It cautions: “The era of big data is full of risk. The algorithmic systems that turn data into information are not infallible—they rely on the imperfect inputs, logic, probability, and people who design them. Predictors of success can become barriers to entry; careful marketing can be rooted in stereotype.

Without deliberate care, these innovations can easily hardwire discrimination, reinforce bias, and mask opportunity.”

So, how can recruiters make big data a viable – and fairer – way of accelerating a recruitment programme?


The key is for HR and data/information officers to work together to devise a predictive system that works best for the organisation. At WCN, we advocate “Groupthink” based algorithms as the best way to naturally reduce bias.

Such collective thinking removes “disparate treatment” – intentional or subconscious bias – because no candidate diversity data is fed in, and “disparate impact” (unintended adverse impact) is tuned out.
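In practice, keeping diversity data out of the model can be as simple as stripping protected fields from each candidate record before it reaches any scoring algorithm. A minimal sketch – the field names here are illustrative, not from any real applicant-tracking system:

```python
# Strip protected/diversity fields from a candidate record before scoring,
# so the algorithm never sees them. Field names are hypothetical examples.
PROTECTED_FIELDS = {"gender", "ethnicity", "age", "disability"}

def blind(candidate: dict) -> dict:
    """Return a copy of the candidate record with protected fields removed."""
    return {k: v for k, v in candidate.items() if k not in PROTECTED_FIELDS}

candidate = {
    "degree_class": "2:1",
    "situational_test_score": 78,
    "gender": "F",
    "ethnicity": "Asian",
}
print(blind(candidate))  # only degree_class and situational_test_score remain
```

Note that removing the fields prevents disparate treatment, but not necessarily disparate impact – proxies for protected attributes can remain in the data, which is why the impact testing described later still matters.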

Firms taking this approach can evidence that they are not discriminating against a candidate or applicant, intentionally or otherwise, on the basis of a protected category or any associated neutral factors.

Instead, they can show that their recruitment process is built on better algorithms that identify and quantify the specific features that determine a candidate’s success.

How to stop algorithms discriminating

For data to become a useful predictive tool, recruiters must first test any algorithm across ranges of the application population to mitigate the risk of discrimination.

There are several stages to making this happen, which I am briefly summarising here.

  • Firstly, define what you want your big data brain to do by specifying a decision and its context – e.g. recommend “interview” or “hire” on receipt of an application – and the criteria it is based on, such as past hire decisions or performance in the job.
  • Secondly, find a large data set of input data and outcomes. Then build the best algorithm for you, run it and analyse the results – cleaning and structuring the data, trying different algorithms as needed, and looking at outliers to determine the business value.
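The two steps above can be sketched in miniature. This is a toy illustration, not WCN’s actual system: the decision is “recommend interview?”, the historical data pairs (invented) application features with a hire-quality outcome, and the “algorithm” is deliberately simple – per-feature-value success rates averaged into a score.

```python
from collections import defaultdict

# Step 1: historical input data and outcomes (invented for illustration).
# Each record is (application features, outcome), where 1 = good hire.
history = [
    ({"test_band": "high", "work_sample": "pass"}, 1),
    ({"test_band": "high", "work_sample": "fail"}, 0),
    ({"test_band": "low",  "work_sample": "pass"}, 1),
    ({"test_band": "low",  "work_sample": "fail"}, 0),
    ({"test_band": "high", "work_sample": "pass"}, 1),
]

# Step 2: a very simple model -- the observed success rate of each
# (feature, value) pair seen in the historical data.
def fit(history):
    wins, totals = defaultdict(int), defaultdict(int)
    for features, outcome in history:
        for kv in features.items():          # kv = (feature, value)
            totals[kv] += 1
            wins[kv] += outcome
    return {kv: wins[kv] / totals[kv] for kv in totals}

def score(model, candidate):
    """Average the historical success rates of the candidate's feature values."""
    rates = [model[kv] for kv in candidate.items() if kv in model]
    return sum(rates) / len(rates) if rates else 0.0

model = fit(history)
print(score(model, {"test_band": "high", "work_sample": "pass"}))
```

A real implementation would use far larger data sets and properly validated models, but the shape is the same: define the decision, gather inputs and outcomes, fit, score, then analyse and refine.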

Harnessing this potential means you are not just dismissing elitism theories; you are also identifying and quantifying any historic bias, reducing bias in future decision-making. It means you can mitigate the influence of disparate impact and focus simply on winning great hires.

Through a trial-test-refine process of using big data, you should be able to identify ranges in which there is no significant statistical impact and go on to build better algorithms without disparate impact.

By identifying and quantifying the features that determine a candidate’s success, you will be better able to quantify any disparate impact and correct the algorithm.
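One widely used way to quantify disparate impact – an assumption on my part, since the article does not name a specific test – is the selection-rate comparison behind the US EEOC “four-fifths” rule: if one group’s selection rate falls below 80% of the highest group’s rate, the process may have adverse impact. A minimal sketch with invented numbers:

```python
# Quantifying disparate impact via the EEOC four-fifths (80%) rule.
# The group names and counts below are invented for illustration.
def selection_rate(selected: int, applied: int) -> float:
    return selected / applied

def impact_ratio(rates: dict) -> float:
    """Lowest group selection rate divided by the highest."""
    return min(rates.values()) / max(rates.values())

rates = {
    "group_a": selection_rate(48, 120),  # 0.40
    "group_b": selection_rate(30, 100),  # 0.30
}
ratio = impact_ratio(rates)
print(round(ratio, 2), "flag for review" if ratio < 0.8 else "ok")
```

Running this check at each trial-test-refine iteration gives a concrete number to track while tuning the algorithm, rather than a vague sense that the process “seems fair”.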

On the reporting side, it helps you provide stronger evidence and record-keeping to support hiring decisions, and to accept more applications with lower resource implications.

Clever algorithms replicate your collective decision making, reducing the influence of bias by individuals or process.

This can lead to a greater democratisation of recruitment by:

  • Recommending candidates who unequivocally perform better: delivering more sales, staying longer in the business
  • Better record-keeping and reproducible decision-making
  • Removing the economic bias to exclude
  • Enabling employers to better understand what drives performance
  • Moving away from the familiar “tried and tested”

The automated cycle of recruitment means you should have a better talent pool of candidates coming through that reflect the future leaders you want joining your organisation.

The business benefits of good data techniques

Clever data techniques will recommend candidates who unequivocally perform better and thereby deliver more revenue, profit, or stay longer in the business.

It means that a business can go on to use algorithms based on how employees perform in the business rather than what line managers decide at interview.

In so doing, it is feasible that technology could free up 66 months of recruiter resource each year – time that could be spent on better engagement techniques, so that a leading candidate with many offers at their disposal is more likely to buy into the culture, mission and vision of our clients ahead of market competitors with equally tempting offers on the table.


In the recruitment game, closing top talent ahead of the competition is a big challenge, and this technology is helping to solve it and reduce offer decline rates in line with corporate objectives.

It also has the potential to widen the candidate pool, making it more diverse and avoiding the challenges of elitism.

The technology can automatically flag candidates who have all the key indicators of success a recruiter is looking for but did not get a qualification from the likes of Oxbridge.

Recruitment is the perfect shop window for predictive analytics for anyone who wants to ensure they are hiring the best-quality candidates.

After all, the market for top talent is highly competitive, and getting a hire wrong isn’t only costly: poor hiring leads to lower productivity, reduced employee morale and engagement, and ultimately more attrition. It is a vicious circle.
