Martin Schmalenbach examines the links between training needs analysis and evaluation. After all, he says, they're two sides of the same business coin.
Training Needs Analysis (TNA) is the process of identifying training needs for an individual or group of people. Its outcome is a clear set of training objectives to be met by whatever method (workshop, on-the-job training, e-learning etc.) is subsequently selected. It takes place before any training is undertaken.
There are several traditional methods for undertaking a TNA:
- Competency gap analysis – usually as part of an appraisal process, people's current competencies are assessed against an existing competency framework, and any competencies where people fall below the required standard indicate a training need (a simple sketch of this gap logic follows this list).
- Performance gap analysis – usually 'as and when' an individual is 'seen to be underperforming' and it's decided this is due to a lack of skills, knowledge, experience etc. This route is typically used where the performance areas do not map obviously onto any predetermined competencies in the framework, or where there is no competency framework in place.
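To make the gap logic concrete, here is a minimal sketch in Python. The competency names, levels and required standards are invented purely for illustration; they do not come from any real framework.

```python
# A minimal sketch of competency gap analysis. The competencies and the
# numeric levels below are hypothetical examples, not a real framework.

required = {"coaching": 3, "planning": 4, "negotiation": 2}  # framework standard
assessed = {"coaching": 2, "planning": 4, "negotiation": 1}  # appraisal result

# Any competency assessed below the required standard indicates a training need;
# the gap size hints at how far the person has to travel.
training_needs = {
    skill: required[skill] - level
    for skill, level in assessed.items()
    if level < required[skill]
}

print(training_needs)  # {'coaching': 1, 'negotiation': 1} -> the TNA output
```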
There are other approaches but these two account for the majority of TNA activity in most organisations. Not every organisation has a formal competency framework, and not every competency framework covers all eventualities, so it is typical to see a combination of these methods in use.
There is no inherent guarantee that training undertaken as a result of only these approaches to TNA will have the required impact on the organisation, even though the training experience itself might be the best thing since sliced bread!
To ensure there is a desired impact on the organisation requires a firm link to be made between performance standards and what the organisation is actually trying to achieve. Competency frameworks can be seen as an attempt to make these links. Generally these links are either not present in all areas where they should be, or are not specific enough to ensure a sufficiently focused training intervention takes place to deliver the required outcome.
Finally, training doesn't take place in isolation – it's almost always part of a bigger picture and piece of work. If the objectives of this bigger piece can be achieved without the support of training, don't train!
Evaluation
Evaluation of training is the process by which a training intervention is assessed for impact and value given the resources used and any disruption arising (e.g. having a person away from work to attend training).
Traditionally the evaluation is conducted once the training has taken place. Donald Kirkpatrick suggested in his famous series of articles, first published in 1959 in the US Training and Development Journal, that evaluation can comprise four levels:
Level 1: the learners' reactions to the training
Level 2: the extent of the learning by each learner
Level 3: the changes to learners' behaviours in the workplace
Level 4: the impact of the training on the organisation's progress to achieving its objectives
It is rare that organisations make any credible attempt to determine if the training was actually worthwhile given the use of resources and disruption caused (Level 4 evaluation). For me this suggests management is being at best 'cavalier' with its resources, and at worst negligent in discharging its responsibilities. The excuse that evaluating the impact might cost more than the training itself did in the first place is, frankly, unproven and a 'cop-out'.
Linking TNA and evaluation
If you conduct a TNA to determine if training is needed, and this TNA is firmly and credibly linked to driving organisational performance, then any training it recommends is likely to have a desirable impact. Repeating the TNA after the training and demonstrating that the original training needs have been met (because the repeat TNA suggests no further training needs) seems like one reasonable approach to evaluating!
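Building on the gap sketch above, a repeat TNA used as evaluation could be as simple as re-running the same comparison on post-training assessments and checking that the needs list comes back empty. The data here is again invented for illustration.

```python
# Re-running the earlier gap analysis after training: an empty result
# suggests the original training needs have been met. Illustrative data only.

required   = {"coaching": 3, "planning": 4, "negotiation": 2}
reassessed = {"coaching": 3, "planning": 4, "negotiation": 2}  # post-training appraisal

remaining_needs = {
    skill: required[skill] - level
    for skill, level in reassessed.items()
    if level < required[skill]
}

# An empty dict is falsy, so this prints the message when no gaps remain.
print(remaining_needs or "no further training needs")
```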
If you only evaluate the impact of the training after it has taken place, and discover the training has not had enough of the desired impact, it is too late to turn back time and fix things. All you can do is move on, either coping with the current situation or taking remedial action perhaps through further training. Either option consumes additional resources and can even mean that desired outcomes can never be reached. It doesn't do the reputation of the training function much good either!
Surely it is better to evaluate the likely impact of the training before committing valuable resources to it? Some will argue that a good TNA process will do just this. I agree, provided the TNA is firmly, explicitly and robustly linked to achieving the objectives of the organisation, and provided the root causes preventing those objectives from being met, as well as the core drivers for achieving them, are directly addressed by the training it recommends.
How can we do this?
- Clarify the problem and the desired outcomes – describe both in terms of specific and well-defined measures considered important to the organisation. Describe both also in terms of observable behaviours. Note the current values and behaviours as your 'baseline'. Avoid any reference to solutions and training at this stage.
- Determine the root causes for the current situation, and identify also any core drivers that push towards the desired performance outcome.
- Do more of what works, and less/none of what doesn't. By this I mean select a combination of root causes and core drivers that, when tackled, gives you just a bit over the desired outcome. By only just achieving the desired outcome you get what you require, but probably for the smallest effort and resource usage. In tackling these root causes and core drivers, do more of what already works and do new stuff to close the gap. Use the time and effort saved by stopping what doesn't work to do the new stuff; this means your workload should be largely unchanged! It also makes it easier to spot the areas where training is essential if the stated outcomes are to be achieved. This is in effect your TNA.
- Review progress, using your baseline and clear problem/goal definitions, to determine if you've achieved the required outcomes. Find out why any shortfalls have occurred and act on this information in the same way you tackled the original problem. I think this might be called continuous improvement…! (A minimal sketch of this loop follows the list.)
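To pull the four steps together, here is a minimal sketch of the baseline-and-review loop in Python. All the measure names, baselines and targets are hypothetical, and this is just one possible shape for the idea rather than a prescribed tool.

```python
# A minimal sketch of the baseline -> intervene -> review loop described
# above. Every measure, value and target here is a hypothetical example.

from dataclasses import dataclass

@dataclass
class Measure:
    name: str
    baseline: float  # current value, recorded before any intervention
    target: float    # the desired outcome, defined up front

def review(measures: list[Measure], current: dict[str, float]) -> dict[str, float]:
    """Compare current values with targets; return any remaining shortfalls."""
    return {
        m.name: m.target - current[m.name]
        for m in measures
        if current[m.name] < m.target
    }

# Step 1: clarify the problem and desired outcomes as specific measures,
# noting the current values as the baseline.
measures = [
    Measure("orders processed per day", baseline=40, target=55),
    Measure("first-time quality rate (%)", baseline=88, target=95),
]

# Steps 2-3 happen off the page: root cause analysis, then the mix of
# existing practice and new activity (including any training) chosen to
# close the gap.

# Step 4: review progress against the same measures used for the baseline,
# and act on any shortfall just as you tackled the original problem.
after_training = {"orders processed per day": 50, "first-time quality rate (%)": 96}
print(review(measures, after_training))  # {'orders processed per day': 5}
```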
Perhaps the surprise here is the realisation that in order to be sure you can demonstrate the value of any training, you have to identify explicit and clear descriptions of the current situation and desired outcomes, and develop a route from the current situation to the desired situation by way of root cause analysis for example. In doing this you gain both the means to evaluate before training – something all managers would ideally like to be able to do – and to develop the required TNA!
Are TNA and evaluation two sides of the same coin? I'd say they're two sides of the same 'business improvement' coin!
One Response
Using multisource feedback (360) as both the diagnostic and the evaluation tool
Martin, I’m fully on board with you on this!
A powerful tool in the trainer's armoury is 360. Used initially, with a well-designed questionnaire, it provides a great diagnostic tool for assessing current effectiveness and where strengths and development needs lie. As the feedback comes from a variety of people who see the individual in action, it has great credibility. Where the survey subject provides self-ratings, it will also clearly identify blind spots.
The feedback conversation (facilitation) then provides the basis for exploring how strengths can be built upon, and how skills deficits might be addressed. This may be more about doing things differently than about attending a course or two, and is very cost-effective.
The initial survey benchmarks the performance level before any training and development activities. The repeated 360 after a suitable time-frame, using the same questions and ideally the same respondents, then measures the changes – hopefully improvements! – that have taken place.
If the jobholder is in a leadership role then you might even try to put some financial measures to the change to get a crude value of the ROI. If the individual leads a workforce whose salary bill is, say, $100k, and the improvement is 10% on the survey performance ratings after the event, then one might argue that the improved 10% will generate $10k's worth of improved team performance, recurring each year. (I know this is a crude measure, but it's better than nothing.)
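To make Harvey's arithmetic explicit, here is a minimal sketch in Python. The $100k salary bill and 10% improvement are his numbers; the $4k training cost is a hypothetical figure added purely so an ROI ratio can be shown.

```python
# A minimal sketch of Harvey's crude ROI arithmetic. The training cost
# is a hypothetical assumption; the salary bill and improvement are his.

def crude_roi(salary_bill: float, rating_improvement: float, training_cost: float) -> float:
    """Rough first-year ROI ratio: (benefit - cost) / cost."""
    # Value the rating improvement as the same fraction of the team's salary bill.
    annual_value = salary_bill * rating_improvement
    return (annual_value - training_cost) / training_cost

print(100_000 * 0.10)                   # 10000.0 -> Harvey's $10k of improved performance
print(crude_roi(100_000, 0.10, 4_000))  # 1.5 -> 150% return, given the assumed $4k cost
```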
At a collective level, where many identical surveys are being run, your before and after measures clearly indicate:
1. where the initial, collective training needs lie, and
2. whether or not interventions have been effective, as judged by the changes in the overall survey assessment results.
Clearly competency frameworks have their place in TNAs, but the flexibility of 360 is that the trainer can work with line managers to identify the behaviours that really make a difference to performance in specific roles, drilling down to the specifics that need to be addressed.
It’s a great way for trainers to demonstrate with hard evidence that they have added value to the organisation.
Harvey