
Charles Hipps

Oleeo

CEO


Ensuring your data is readable and actionable in talent acquisition


Forrester reports 74% of firms say they want to be “data-driven,” but only 29% are actually successful at connecting analytics to action. Actionable insights appear to be the missing link for companies that want to drive business outcomes from their data.

In recruiting, when you use aggregated data sets to make decisions, you need to correct for the biases already present in them. Intelligence technology has advanced a long way in twenty years, but it would be foolish to assume that bias is no longer a problem. What has changed is that once biases are recognised, it is possible to adjust for them.

Sounds simple when I say it like that, but it is no mean feat. Recruiters need to become better at interpreting any data they choose to rely on: evaluate it, perhaps using a data mining tool, and then make decisions based on it. At the point of analysis, though, you need to question very carefully where the risks of bias lie and how you are going to correct for them. Intelligence cannot be purely artificial – humans have to understand how it will go on to be used.
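The article does not prescribe a method for checking bias at the point of analysis, but one common illustrative check is the "four-fifths rule": comparing selection rates across groups and flagging any group whose rate falls well below the highest. The sketch below is an assumption-laden example of that check, not a description of any specific vendor's tooling.

```python
# Illustrative bias check on aggregated hiring data: the "four-fifths rule"
# comparison of selection rates across groups. A flag here is a prompt to
# investigate and correct, not a verdict.

def selection_rates(outcomes):
    """outcomes: list of (group, hired) pairs; returns hire rate per group."""
    totals, hires = {}, {}
    for group, hired in outcomes:
        totals[group] = totals.get(group, 0) + 1
        hires[group] = hires.get(group, 0) + (1 if hired else 0)
    return {g: hires[g] / totals[g] for g in totals}

def adverse_impact_flags(outcomes, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times the
    highest group's rate (the four-fifths rule of thumb)."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: r / best < threshold for g, r in rates.items()}

# Tiny made-up sample: group A hired 2 of 3, group B hired 1 of 4.
records = [("A", True), ("A", True), ("A", False),
           ("B", True), ("B", False), ("B", False), ("B", False)]
print(adverse_impact_flags(records))  # group B falls below four-fifths of A's rate
```

In practice the groups, thresholds and remedies would be set by the organisation's own compliance policy; the point is that the check happens before decisions are made on the data, not after.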

There is a fundamental point here – data itself will not get you a decision

You need data, you need to be able to generate insight from it, and you need to be able to link that insight to an action. Humans should therefore never be taken out of the equation. Technology itself is simply an enabler – and the more complex the technology becomes, the simpler it needs to be to use.

We’re all used to having technology influence our everyday decisions – from the music we listen to, to the road directions we take to get from A to B.  To apply this to talent acquisition, intelligent technology needs to be just as easy to use and compelling in its results.

If it’s not actionable, it’s just numbers on a computer screen and where’s the point of that?!

When asking machines to make decisions for us, there remains a risk that they will throw up potential discrimination issues.

At WCN, we’ve been adopting this approach for two decades now. Our online recruitment technology has revolved around making recruiters’ lives easier and from its early days, we have focused on making talent acquisition a function that can prove demonstrable ROI to a business balance sheet.

It started with simple pie charts and bar graphs that can be easily lifted and excerpted into a board report. As the HR function became akin to a financial one, it has evolved into more complex spreadsheets and trackers that incorporate the charts/graphs but offer real numerical evidence to back up the claims – be that time to hire, cost per hire, source effectiveness tracking or simply the cost of participating in events for new jobseekers.
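The metrics named above (time to hire, cost per hire, source effectiveness) are straightforward aggregations once hires are recorded consistently. Here is a minimal sketch of those calculations; the field names and figures are illustrative assumptions, not a real WCN schema.

```python
# Hypothetical hire records: requisition opened/filled dates, total cost,
# and the sourcing channel. Field names are assumed for illustration.
from datetime import date

hires = [
    {"opened": date(2017, 1, 3),  "filled": date(2017, 2, 14),
     "cost": 4200.0, "source": "job board"},
    {"opened": date(2017, 1, 10), "filled": date(2017, 3, 1),
     "cost": 6100.0, "source": "referral"},
    {"opened": date(2017, 2, 1),  "filled": date(2017, 2, 28),
     "cost": 3800.0, "source": "job board"},
]

# Average time to hire, in days, across all filled roles.
time_to_hire = sum((h["filled"] - h["opened"]).days for h in hires) / len(hires)

# Average cost per hire.
cost_per_hire = sum(h["cost"] for h in hires) / len(hires)

# Source effectiveness as a simple count of hires per channel.
source_counts = {}
for h in hires:
    source_counts[h["source"]] = source_counts.get(h["source"], 0) + 1

print(f"Average time to hire: {time_to_hire:.1f} days")
print(f"Average cost per hire: {cost_per_hire:.2f}")
print(f"Hires by source: {source_counts}")
```

Numbers like these are what turn a chart in a board report into the "real numerical evidence" the paragraph describes.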

Now in 2017, the approaches are getting more sophisticated. WCN is working with graduate recruiters to use predictive analytics to broaden the scope of where emerging talent will come from. To fine tune our algorithms, we have worked with University College London and have hired data scientists to translate complex datasets into easy-to-read dashboards that can potentially inform future strategies.

Our approach from the start has been to help develop an algorithm that helps recruiters to quickly make decisions. The technology works on the basis that an insight that drives action is typically more valuable than one that simply answers a question–especially an insight that makes you rethink something and pushes you in a new direction. In our case, this is achieved by automatically flagging those whose applications are worthy of inviting to interview and predicting those who are likely to have an offer extended to them.
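The flag-and-predict behaviour described above can be pictured as a scoring pipeline. The real WCN algorithm is not public, so the weights, feature names and thresholds below are purely illustrative assumptions about how such a triage step might look.

```python
# Illustrative "flag for interview" triage: a weighted score over normalised
# application features, with assumed cut-offs for interview invitation and
# for predicting a likely offer. All numbers here are invented for the sketch.

WEIGHTS = {"skills_match": 0.5, "experience_years": 0.3, "assessment": 0.2}
INTERVIEW_THRESHOLD = 0.6   # assumed: score at or above this flags an interview
OFFER_THRESHOLD = 0.8       # assumed: score at or above this predicts an offer

def score(application):
    """Combine normalised features (each in 0..1) into a single fit score."""
    return sum(WEIGHTS[k] * application[k] for k in WEIGHTS)

def triage(application):
    """Return the actionable outputs: the score and the two flags."""
    s = score(application)
    return {
        "score": round(s, 2),
        "invite_to_interview": s >= INTERVIEW_THRESHOLD,
        "likely_offer": s >= OFFER_THRESHOLD,
    }

print(triage({"skills_match": 0.9, "experience_years": 0.7, "assessment": 0.8}))
```

The design point is that the output is an action (invite, or not) rather than a bare statistic – which is exactly the insight-to-action link the surrounding paragraphs argue for.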

The actionable insight comes from revealing which applicants are a better fit for positions within the company, correlating skills and work values to numbers and percentages. It’s not about the name on the CV or the cultural background of an interviewee. Instead, companies can focus on the candidates with the right expertise, experience and potential to be productive within their already established teams – provided the humans in the equation set aside their own biases.

An example: loads of applications for a graduate position

To take just one example, in an increasingly competitive job market, an organisation may receive applications from hundreds of highly-qualified, hopeful graduates for just a few vacancies.  Often, it will take a disproportionate amount of human effort to sift through them.

Crucial experience, context or personal attributes may be lost in the morass of information. Application forms may be divided amongst several people who each take a slightly different approach.  Some may not be given due attention, simply because they are considered at the end of a long day. 

As a result, gifted candidates may be overlooked due to human fallibility or unintentional bias. To meet these challenges, AI systems can be put to effective use in conducting an initial review of applications, to produce a shortlist of candidates for interview.

One of the most frequent questions I am asked when discussing the adoption of artificial intelligence in recruitment to help boost diversity is: what happens when machines discriminate?

A blog post by White House staff captures this fear perfectly. It cautions: “The era of big data is full of risk. The algorithmic systems that turn data into information are not infallible—they rely on the imperfect inputs, logic, probability, and people who design them. Predictors of success can become barriers to entry; careful marketing can be rooted in stereotype. Without deliberate care, these innovations can easily hardwire discrimination, reinforce bias, and mask opportunity.”

The reality is that, as with any traditional talent acquisition methodology, balancing the risks and opportunities of artificial intelligence is imperative. In recognising the opportunities that AI brings, we must also be mindful of the possible pitfalls.

In particular, workers and job candidates are protected from discrimination related to certain protected characteristics (such as age, disability, sex, race, sexual orientation and religion or belief).


Used well, blind recruitment can help ensure demonstrable insights that show differences as a strength: protected data is not used by the machine as a criterion for decisions. But it is crucial to keep in mind that while blind recruitment limits the impact of unconscious bias, a degree of personal responsibility is still required.
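The core mechanism of blind recruitment – protected data never reaching the decision logic – can be sketched as a filtering step applied before any model sees a record. The field names below are illustrative assumptions, not a real schema.

```python
# Sketch of the blind-recruitment principle: strip protected characteristics
# from an application record before any scoring logic sees it, so they
# cannot be used as decision criteria. Field names are assumed.

PROTECTED_FIELDS = {"name", "age", "sex", "race", "disability",
                    "sexual_orientation", "religion_or_belief"}

def blind(application: dict) -> dict:
    """Return a copy of the application with protected fields removed."""
    return {k: v for k, v in application.items() if k not in PROTECTED_FIELDS}

raw = {"name": "A. Candidate", "age": 42,
       "skills": ["SQL", "Python"], "experience_years": 6}
print(blind(raw))  # {'skills': ['SQL', 'Python'], 'experience_years': 6}
```

Note that this only removes direct use of protected data; proxies for it can survive in other fields, which is why the personal responsibility mentioned above remains necessary.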

Therefore, it makes sense to adopt a collaborative approach that is aimed at spotting issues early, agreeing who is responsible for putting them right and refining automated processes to avoid repeat mistakes. We advocate adopting internal guidance for employees who use (or, as the case may be, develop) AI tools and an external policy or agreement which sets out clearly how discrimination issues will be managed.

Talent acquisition can be intelligent by following this style of technological exploration, and readable, actionable data is the way to get there. We anticipate lateral hiring will follow suit. Foresight is the future, following in the footsteps of the many organisations already using horizon scanning for workforce planning. Are you ready to take the leap and change the game for the better?
