
Feature: Training Evaluation Part 4 – Taking control for ROI


In the last instalment of this series, Martin Schmalenbach goes beyond Kirkpatrick levels 1 and 2 to look at the value of control groups, and offers some tips on how to get a real sense of ROI. Read part one, part two and part three.


A popular technique for evaluating training beyond Kirkpatrick levels 1 and 2 is the control group. The concept is quite simple: take two groups, put one through the training and not the other, then compare the performance of the two groups afterwards. The group that has undergone the training is presumed to perform better than the group that didn't, and the value of the increased performance is attributed wholly to the training. The return on investment (ROI) can then be calculated with ease.
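To make the arithmetic concrete, here is a minimal sketch of that ROI calculation. The figures are entirely hypothetical, and the calculation deliberately bakes in the very assumption questioned below: that the whole performance gain can be attributed to the training.

```python
# A minimal sketch of the ROI arithmetic described above. All figures are
# hypothetical, and the calculation assumes the whole gain is down to the
# training - exactly the presumption the article goes on to question.

def roi_percent(benefit: float, cost: float) -> float:
    """Classic ROI formula: net benefit over cost, as a percentage."""
    return (benefit - cost) / cost * 100

# Hypothetical figures: the trained group's extra output is valued at
# £50,000 a year; the training cost £20,000 to design and deliver.
benefit = 50_000.0
cost = 20_000.0
print(f"ROI = {roi_percent(benefit, cost):.0f}%")  # -> ROI = 150%
```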

The fact that one group doesn't get to do the training, at least not for a while, means that not everybody can operate at a higher level of performance at the earliest opportunity – there is a cost here in terms of lost opportunity.

Note also that this approach only enables an evaluation after the fact – you cannot readily forecast ROI with it – which may not help managers decide where best to deploy limited resources.

There is also the presumption that any change in performance is due to the training alone (ie just one factor), and thus that the change is likely to be positive. If the performance change were negative, would there be moves to demonstrate that some other factor was responsible – and which factors were actually at work? Using control groups in this way is unlikely to lend credibility to the evaluation, because it is not clear which factor or factors, including the training itself, are at work.

Credibility

If control groups are to be used, perhaps as part of a wider evaluation strategy, then some steps should be taken to ensure the credibility of the results.

Briefly these are:
* Identify all the factors that could influence the performance levels.
* Create enough groups that each factor's contribution to any change in performance can be isolated and demonstrated.
* Ensure that the groups are 'statistically similar' to one another, for example by randomly assigning personnel to each, and guard against bias creeping into the minds of individuals who know they are taking part in what is effectively an experiment (the so-called Hawthorne effect).

Generally, for two factors a minimum of four groups is needed; for three factors this rises to eight groups, for four factors 16 groups, and so on, doubling with each extra factor. Typically there will be anywhere from four to eight factors accounting for 80% or more of any performance change. That's a lot of control groups! And a lot of elapsed time to conduct all the 'experiments'.
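The doubling is simply the arithmetic of a full factorial design: one group for every on/off combination of the factors being isolated. A short sketch, with purely illustrative factor names:

```python
# Why control groups multiply: isolating k factors with a full factorial
# design needs one group per on/off combination of the factors, i.e. 2**k
# groups. The factor names below are purely illustrative.

from itertools import product

def groups_needed(factors: list[str]) -> list[tuple[bool, ...]]:
    """One group per present/absent combination of the factors."""
    return list(product([False, True], repeat=len(factors)))

factors = ["training", "new tooling", "incentive scheme"]
print(f"{len(factors)} factors -> {len(groups_needed(factors))} groups")  # 3 -> 8

for k in range(2, 9):
    print(f"{k} factors -> {2 ** k} groups")  # 4, 8, 16, ... 256 groups
```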

Now perhaps it becomes clear why the use of control groups in past evaluations has not been enough to establish credibility – in fact, used this way, the approach opens the door to the rapid destruction of any credibility the evaluation might gain from other techniques.

“Expert” estimates of improvement

A second popular technique used in evaluations is to ask experts – the delegates attending the training and/or their line managers, perhaps even some 'real' experts with long experience of the processes affected by the training – to estimate the change in performance resulting from the training intervention.

This is a great way of involving people with clear expertise and a reputation in the processes affected. But do they consider all the possible contributing factors? Can they? Their very deep expertise and experience may blind them to some well-hidden but significant factors. Or it may not. Either way, all you have is an estimate, albeit from some credible people.

The point is, nobody can know without carrying out a rigorous and robust root cause analysis. And this needs to be done before the training and other interventions, to ensure that the waters aren't muddied further. We can all appreciate that in legal cases each side can produce its own highly regarded experts to refute the other side's assertions.

This technique has its merits, but not in the way it is used here. In this situation it suffers the same pitfalls as a badly used control group approach.

Root cause analysis

A root cause analysis, in its simplest form, is a process for identifying all the possible root causes contributing to a phenomenon, such as poor performance, and then pinpointing the few that account for the vast majority of it.

For example, well over 100 root causes affect the accuracy of a Roman catapult. Many are quite obscure – air density, gusts, turbulence, the temperature of the load, friction between the load and the cup that holds it at the end of the catapult arm – while others are obvious, such as the tension in the ropes, the mass of the counter-balance and the direction the catapult is pointing. But only about six are of practical interest, as they generally account for perhaps 90% or more of the variation in accuracy.
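As a rough illustration of that 'vital few' idea, the sketch below ranks factor contributions and keeps only those needed to cover about 90% of the variation. The catapult factors and percentages are invented for the example, loosely echoing the figures above:

```python
# The 'vital few' behind a root cause analysis: rank factors by their
# estimated share of the variation and keep only those needed to cover
# ~90% of it. All names and percentages here are hypothetical.

contributions = {
    "rope tension": 35.0,
    "counter-balance mass": 25.0,
    "aiming direction": 12.0,
    "load mass": 8.0,
    "arm friction": 6.0,
    "release timing": 5.0,
    "air gusts": 3.0,
    # ...dozens of obscure causes share the remaining few percent
}

def vital_few(contributions: dict[str, float], threshold: float = 90.0):
    """Return the smallest set of causes covering the threshold share."""
    cumulative, selected = 0.0, []
    for cause, share in sorted(contributions.items(), key=lambda kv: -kv[1]):
        selected.append(cause)
        cumulative += share
        if cumulative >= threshold:
            break
    return selected, cumulative

causes, covered = vital_few(contributions)
print(f"{len(causes)} causes cover {covered:.0f}% of the variation")
# -> 6 causes cover 91% of the variation
```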

Without this clarity we can never know what needs to be changed to improve the accuracy of the catapult – we are just blindly fiddling with things in the hope of getting it right. Sometimes we do, and all seems to go fine for a while; then it all goes wrong again, and repeating what we did last time just doesn't seem to work any more.
Sound familiar?

So, do a root cause analysis and you will know in advance the likely performance improvements and what you need to do to realise them. That makes the ROI part not only easy, but credible.
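Put together, this is what lets you forecast ROI before committing resources, rather than only measuring it after the fact. A minimal sketch under that assumption, again with hypothetical numbers:

```python
# Sketch of forecasting ROI *before* the intervention, assuming a root
# cause analysis has already attributed shares of the performance gap
# to addressable causes. Every number here is hypothetical.

performance_gap_value = 200_000.0   # annual cost of the performance gap, £
share_addressed_by_training = 0.30  # training tackles causes worth 30% of it
training_cost = 20_000.0

expected_benefit = performance_gap_value * share_addressed_by_training
roi = (expected_benefit - training_cost) / training_cost * 100
print(f"Forecast ROI = {roi:.0f}%")  # -> Forecast ROI = 200%
```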

Expectations

A further danger is failing to manage the expectations of the client and others involved – and, related to this, keeping everything a big secret.

So long as you help to manage the expectations of all involved, there can be no horrible surprises for anybody. This doesn’t mean that everybody is happy, just that you are less likely to be blamed when the desired outcome doesn’t materialise.

Keeping everybody affected by an intervention informed of what is happening – why, when, and with or to whom – makes it less likely that you will encounter resistance, passive or otherwise, and more likely that you will get the information needed to help the client make some really firm evaluations before committing resources. That can only help your own cause in the long term.

About The Author
Martin Schmalenbach has been enhancing performance through change and training & development for more than 10 years, in organisations ranging from the RAF and local government through to manufacturing and financial services. He has degrees in engineering and in management training and development. For the past three years he has focused on developing and implementing a rigorous, robust and repeatable process for ensuring interventions contribute to the bottom line. You can find out more at 5boxes.com and How Did I Get Here?
