In the second of four articles on evaluation, Martin Schmalenbach ventures beyond Kirkpatrick to look at other approaches to evaluation.
Read part one here.
Apart from the widely known work of Kirkpatrick, there are several other approaches to evaluating training. Each has its own features and benefits that may make it more useful in certain scenarios.
Some notable approaches include Tyler’s Objectives Approach, Scriven’s Focus On Outcomes, Stufflebeam’s CIPP (Context evaluation, Input evaluation, Process evaluation, and Product evaluation), the related CIRO framework (Context evaluation, Input evaluation, Reaction evaluation, and Outcome evaluation), Guba’s Naturalistic Approach, and the V Model (Bruce Aaron).
Tyler argues that one of the main problems with education is that educational programmes “do not have clearly defined purposes.” By “purposes” he means educational objectives. This objectives-based approach is at the core of what Tyler proposes for evaluation, which follows these steps:
1. Establish broad goals or objectives.
2. Classify the goals or objectives.
3. Define objectives in behavioural terms.
4. Find situations in which achievement of objectives can be shown.
5. Develop or select measurement techniques.
6. Collect performance data.
7. Compare performance data with behaviourally stated objectives.
Discrepancies in performance then lead to modification and the cycle begins again.
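Purely as an illustration – this is not part of Tyler’s own work, and all the behaviours and figures below are hypothetical – the comparison in step 7 can be sketched in a few lines of Python:

# Illustrative sketch of Tyler's step 7: comparing collected performance
# data against behaviourally stated objectives. All behaviours and
# figures are hypothetical.

objectives = {  # target proportion of occasions each behaviour should be seen
    "opens meetings by agreeing objectives": 0.90,
    "summarises actions before closing": 0.80,
}

observed = {  # proportions actually measured after the programme
    "opens meetings by agreeing objectives": 0.72,
    "summarises actions before closing": 0.85,
}

for behaviour, target in objectives.items():
    actual = observed.get(behaviour, 0.0)
    if actual >= target:
        print(f"{behaviour}: objective met ({actual:.0%} vs target {target:.0%})")
    else:
        # A discrepancy: this feeds back into modifying the programme.
        print(f"{behaviour}: shortfall of {target - actual:.0%}")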
This is in many respects Kirkpatrick’s Level 3, but expressed in more detail. I’m assuming that the training/education occurs somewhere between steps 3 and 5. It is possible to do some baselining (i.e. gather some ‘pre-training’ performance data), though the language of step 7 suggests you compare post-event behaviours with those you wanted to develop, not with how things were before.
However, the objectives, being defined in terms of behaviours, are less obviously connected to the kind of results that facilitate evaluation in ROI terms. Nor is there anything here about the impact of other factors on behaviours, such as culture, structure, targets and so on.
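For context, the ROI calculation that such critiques have in mind – as popularised by Jack Phillips – is usually expressed as:

\[
\text{ROI}\ (\%) \;=\; \frac{\text{net programme benefits}}{\text{programme costs}} \times 100 \;=\; \frac{\text{benefits} - \text{costs}}{\text{costs}} \times 100
\]

So, to take a purely illustrative example, a programme costing £20,000 that can be shown to have returned £50,000 in benefits has an ROI of (50,000 − 20,000) ÷ 20,000 × 100 = 150%.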
Scriven’s Focus On Outcomes requires an external evaluator, who is unaware of the programme’s stated goals and objectives, to determine the value and worth of that programme based on the outcomes or effects and the quality of those effects.
In one sense this is fine when focusing on the organisation’s performance – it is perhaps easier to see the effect of the programme there than when looking at individual performance or goals. There could be issues about individual bias and interpretation, and about the extent to which the evaluator is, or can be, briefed. By definition this model cannot readily forecast likely outcomes, and so it does not lend itself to ready use in an ROI context, especially as it makes little reference to determining the root causes of poor performance or unwanted behaviours.
Stufflebeam’s CIPP model is what is known as a systems model. Its primary components are:
* Context – identify target audience and determine needs to be met.
* Input – determine available resources, possible alternative strategies, and how best to meet needs identified above.
* Process – examine how well the plan was implemented.
* Product – examine results obtained, whether needs were met, what planning for the future is required.
Interestingly, this model explicitly looks at both process and product – it is both formative and summative in focus (terms defined in part 1). Evaluation of likely outcomes before the training is actually delivered is not included, so the model does not lend itself to ready use in an ROI context without further modification. The ‘context’ element also presupposes that training is part of the solution, and therefore assumes a prior step that makes this determination; as it stands, then, the model is further removed from the needs of ROI-based evaluation. Unlike the Phillips and Kirkpatrick models, however, it does require the effectiveness of the process to be examined – something other texts often call ‘validation’, to distinguish it from evaluation proper, which focuses on outcomes: did the programme deliver its objectives?
The CIRO model, developed by Warr, Bird and Rackham, encompasses some of Kirkpatrick’s levels – specifically level 1 and, arguably, level 4, if the outcomes are expressed in terms of business impact. Its main elements are Context, Input, Reaction and Outcome. It is very similar to the CIPP model in most other respects and, to my mind, shares its lack of detail and prescription on how to undertake any of these four main elements.
It could be argued that both the CIPP and CIRO approaches could follow Kirkpatrick and Phillips in using control groups and subject matter experts’ estimates of improvement, in order to deliver a repeatable process that can begin to answer questions of value and the good use of limited resources.
The Guba & Lincoln model places its emphasis on collaboration and negotiation among all the stakeholders, with the evaluation itself acting as a change agent, in order to “socially construct” a mutually agreed-upon definition of the situation.
All the stakeholders involved (including the evaluators) are assumed to be equally willing to agree to change. On further reflection this probably most closely resembles reality in organisations where evaluation is required after the fact. In the absence of any objective tools, the stakeholders collectively agree a judgement on the value of the programme in question. It lends some structure to the notion that training “is done on trust”. It seems not to lend itself to the rigour and objectivity demanded by an ROI approach.
The ‘V Model’, as adapted by Bruce Aaron, is based on an approach used in the IT world for developing software.
Imagine a ‘V’ whose left-hand slope is labelled ‘analysis and design’. Moving down that slope from the top you will find ‘business need’, then ‘capability requirements’, then ‘human performance requirements’ and finally, at the bottom where the two slopes join, ‘performance solution’. The right-hand slope is labelled ‘measurement and evaluation’; from its top moving down you will find ‘ROI / business results’, then ‘capability status’, then ‘human performance impact’.
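Sketched out – purely as a visual aid, using the labels above – the model looks roughly like this:

Analysis & design                      Measurement & evaluation
Business need ...................... ROI / business results
  Capability requirements .......... Capability status
    Human performance requirements . Human performance impact
              Performance solution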
The pairing of each element with its counterpart at the same level on the opposite slope is deliberate – an almost symbiotic relationship between analysis and design on one side and measurement and evaluation on the other. The model is both formative and summative, looking at capability/process as well as solution/product.
It is very much designed to support the ROI approach, though it is not immediately apparent whether the ROI and evaluation can be readily forecast before committing to the solution – arguably the model supports the concept, even if it is light on the details of how this is done.
Interestingly, none of the models, with the possible exception of the ‘V’ model, suggests who should be responsible for doing which bits. With the bulk of the thinking having been done by people connected to the training world, though, there is an assumption, borne out in practice, that the trainers do it (and take the hit for a poor result).
Further reading
An interesting timeline of evaluation can be found at http://www.campaign-for-learning.org.uk
Further brief information is available at http://web.syr.edu/~bvmarten/evalact.html, which has been a useful source for this article.
About The Author
Martin Schmalenbach has been enhancing performance through change and training & development for more than 10 years, in organisations ranging from the RAF and local government through to manufacturing and financial services. He has degrees in engineering and in management training and development. For the past three years he has focused on developing and implementing a rigorous, robust and repeatable process for ensuring interventions contribute to the bottom line. You can find out more at 5boxes.com and How Did I Get Here?