There are some powerful use cases for data science in HR, whether that’s chatbots delivering real-time answers to important analytical questions, or the huge cost savings that can come from reducing employee churn.
So why isn’t everyone doing all this already? Here, I’d like to explain five of the most common blockers to making data-driven HR decisions, and the questions you need to ask if you want to make your journey into data science successful.
Blocker 1: A lack of case studies
If you’re looking to start transforming HR with data science, there isn’t going to be a lot of case-study evidence to point to. Some companies want others to test the water before they jump in, but that isn’t going to happen here. Although large firms are making big investments in this area, the work is valuable intellectual property, which means the chances of details being shared are slim.
Ask yourself: Will I lead the way?
The only way to keep up in this evolving field is to have the self-confidence to be a pioneer. If you wait five years, there still won’t be many case studies: the benefits of machine learning (ML) in HR will be proven time and again, but many of the companies leading the way still won’t have published the secrets of their success.
Blocker 2: Engaging business leaders
So how do you build a business case for something leaders might see as unproven?
Business leaders will only consider experimental work if:
- You can clearly articulate how much the current problem is costing
- You can clearly state how much money the data science project will save
If a data scientist says, ‘This will improve retention’, that’s not good enough: leaders will want to know what the expected return will be in pounds saved.
That means you have to build a case that is clearly rooted in business needs. What does the business need now and how would a solution build value? You need to avoid schemes where the business value is unclear, no matter how ‘interesting’ they seem.
Explaining projects in a way that demonstrates their value and meaning to the business has not traditionally been data scientists’ strength. Think like a marketer or PR person. What is the business problem? Lay that out. Then make the case clearly and powerfully for how data science will overcome that problem, always showing how it will generate profit or value.
Ask yourself: How do I present this in a way business users find useful and meaningful?
Let’s turn back to our example of workforce planning. It’s hard to argue that it will definitely improve employee retention because there isn’t public evidence to back that up.
What ML will definitely provide are forecasts about your workforce that are significantly more accurate than anything a human can do. That in turn allows management to make better informed decisions and manage risks well ahead of time. And that might well improve employee retention.
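As a rough illustration of the kind of forecasting involved, here is a minimal sketch that projects monthly headcount a year ahead. The file name, column names and model choice are all hypothetical assumptions, not a prescription:

```python
# Minimal sketch: forecasting monthly headcount with exponential smoothing.
# Assumes a CSV with 'month' and 'headcount' columns (hypothetical names).
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

df = pd.read_csv("headcount.csv", parse_dates=["month"], index_col="month")
series = df["headcount"].asfreq("MS")

# Fit a Holt-Winters model with an additive trend and yearly seasonality.
model = ExponentialSmoothing(series, trend="add", seasonal="add", seasonal_periods=12)
fit = model.fit()

# Forecast the next 12 months so managers can plan hiring well ahead of time.
forecast = fit.forecast(12)
print(forecast.round(0))
```

In practice you would compare several models against a held-out period, but even a simple baseline like this usually beats manual spreadsheet extrapolation once a few years of clean data are available.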
Also, be careful not to set up unrealistic expectations for any proof-of-concept work: the power and value of ML grows over time as more data becomes available. You are not going to get earth-shattering results in a one-month proof of concept.
Blocker 3: Picking the right data-science projects
You can’t become a data-driven business unless the analytics and data science you’re developing are adopted by the workforce and used in their daily processes. And they’ll only be adopted if they address a problem and are actionable.
The key word here is actionable. The business must be able to do something with the results of the project. For instance, there’s no point presenting predictions about which employees or types of employee are at risk of leaving if managers can’t then do anything with the results, because possible interventions like a pay rise or a promotion are not open to them.
One problem we see often is that data scientists, excited by the potential of an ML project, work on it for six months, and present it to the business, which has no use for the results. Or the project looks scary or detrimental to the wellbeing of the workforce. For example, a lot of performance-monitoring projects struggle to shake the image of ‘Big Brother’.
Ask yourself: How can I collaborate to build something that’s actionable?
Engage users in HR right from the start. Gather insight from HR employees about where there are challenges or bottlenecks that ML could help with. By collaborating early you can understand what the business needs and how best to get the result.
Aim for use cases that are positive, that people can really champion. Asking employees to get behind a scheme that monitors which of them are performing poorly won’t get as much traction as something that actually helps everyone do their job (like a digital assistant). You want to optimise the business rather than monitor the business.
Once you’ve got buy-in from management and users, what are the other potential blockers?
Blocker 4: Data quality and maturity
If you want to start getting valuable insights, you need good data quality. If there’s been an acquisition or merger, then there’s likely to be duplication and incomplete records, as well as legacy data systems that don’t easily talk to each other.
There’s also the matter of historical data sets. If you’re looking to analyse trends over time, you need several years’ worth of data. It seems obvious, but you can’t get a trend over 10 years unless you have complete data going back 10 years.
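As a sketch of what a first-pass data audit might look like, assuming employee records exported to a CSV with hypothetical column names such as 'employee_id', 'start_date' and 'leave_date':

```python
# Minimal data-quality audit sketch: duplicates, missing values and historical coverage.
# The file and column names are hypothetical assumptions.
import pandas as pd

records = pd.read_csv("hr_records.csv", parse_dates=["start_date", "leave_date"])

# Duplicate employee records, e.g. left over from merging two legacy HR systems.
duplicates = records[records.duplicated(subset="employee_id", keep=False)]
print(f"Duplicate records: {len(duplicates)}")

# Incomplete records: proportion of missing values per column.
print(records.isna().mean().sort_values(ascending=False))

# Historical coverage: how far back does the data actually go?
print(f"Earliest start date on record: {records['start_date'].min():%Y-%m}")
```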
Ask yourself: How can we change the way we work to create high-quality data?
There’s no magic wand for this. Improving data quality often needs an investment in the source HR systems. That of course brings its own benefits – making routine reporting much easier and less time-consuming, for example.
But there’s no path to data-driven decision making that doesn’t start by climbing the hill of getting the data coherent and robust. Once that is done, the benefits can be concrete and huge – a tiny reduction in employee churn at a big company can save tens of millions each year.
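To make that concrete, here is a back-of-the-envelope calculation in which every figure is a hypothetical assumption:

```python
# Back-of-the-envelope churn saving; all figures here are hypothetical assumptions.
headcount = 100_000          # employees at a large firm
annual_churn = 0.15          # 15% leave each year
cost_per_leaver = 30_000     # GBP: recruitment, onboarding, lost productivity
churn_reduction = 0.01       # churn falls by one percentage point

baseline_cost = headcount * annual_churn * cost_per_leaver       # £450m a year
annual_saving = headcount * churn_reduction * cost_per_leaver
print(f"Annual saving from a 1pp churn reduction: £{annual_saving:,.0f}")  # £30,000,000
```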
Blocker 5: GDPR and compliance
The GDPR has been on everyone’s mind since it was introduced in 2018. Broadly, the rule is that you can’t subject people to a decision based solely on automated processing that significantly affects them, unless safeguards such as their explicit consent are in place. But you can still use machine learning to make a prediction that helps a manager make a better decision.
Still, snags remain. How do you prove the manager wasn’t unduly influenced? That the model was transparent and didn’t suffer from bias? There are ethical and accountability considerations that may put people off.
Ask yourself: How do we produce ethically sound data-science models?
The answer lies in a formal review process before algorithms are made live. Build a diverse ethics committee which signs off AI or ML models. That means the data scientists have to explain how the models work, and managers will have to explain how they are using the insights. This is not just tick-box compliance: these steps are vital to building trust and getting full buy-in.
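As one illustration of the kind of evidence such a committee might ask a data scientist to table, here is a minimal sketch. It assumes a fitted scikit-learn-style attrition model, a test DataFrame and a separate series holding a protected attribute (all names are hypothetical), and it lists the features the model leans on most alongside average predicted risk per group:

```python
# Minimal sketch of evidence an ethics review might ask for.
# Assumes `model` is a fitted scikit-learn classifier; `X_test` and `groups`
# (a protected attribute) share the same index. All names are hypothetical.
import pandas as pd

def review_model(model, X_test: pd.DataFrame, groups: pd.Series) -> None:
    # Transparency: which inputs drive the predictions?
    if hasattr(model, "feature_importances_"):
        importances = pd.Series(model.feature_importances_, index=X_test.columns)
        print("Top drivers:\n", importances.sort_values(ascending=False).head(5))

    # Bias check: does predicted attrition risk differ systematically by group?
    risk = pd.Series(model.predict_proba(X_test)[:, 1], index=X_test.index)
    print("Mean predicted risk by group:\n", risk.groupby(groups).mean())

# e.g. review_model(attrition_model, X_test, demographics.loc[X_test.index, "gender"])
```

A simple report like this will not settle every ethical question, but it gives the committee something concrete to interrogate before a model is put in front of managers.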