Creating Simple but Valuable Training Evaluations

In my previous post I shared my thoughts on how we use ratings and recommendations as part of our evaluation strategies.  I talked about how we might now use these to overcome some of the challenges of relying on the traditional evaluation sheet and to increase overall response rates, on the basis that something is better than nothing.

But I’d now like to turn back to the evaluation form itself and share what I’ve done in this space, starting a few years ago.

For more years than I can remember, my main evaluation driver has been to measure the personal and business impact of learning.  You might say it’s a bit of a hard-nosed approach, but I’ve long believed that the true success of a piece of learning lies in what happens afterwards.  Constant reminders that most learners will learn regardless of the approach have renewed my focus on what learners can tell me about the impact the learning has had, rather than about the learning itself.

But I do also realise that we need to continually improve our offerings, so we can’t totally disregard what learners think about the training itself.  That said, as you will see, I think we should allow them to identify where there might be room for improvement, rather than lead them to conclusions.

First, a quick recap of the two approaches that have guided my thinking, and I’ll start with Kirkpatrick.  Love it or hate it, it’s the model we return to time and time again.

Kirkpatrick’s Four Levels of Training Evaluation
Level 1: Did they like the training itself?
Level 2: Did they learn anything?
Level 3: How has their personal performance improved?
Level 4: How has business performance improved?

The second approach I’ve valued is storytelling.  Here – quite simply – you have conversations with stakeholders and learners to find out how things have changed for them since the training, seeking out lots of examples and mini case studies to use as evidence.

The work I’ve done in this area has sought to combine both approaches and was part of a wider effort to increase overall response rates to evaluations by reducing the number of questions asked to an absolute minimum.  We also wanted to introduce more follow-up surveys, which are much more valuable than an immediate post-training reaction questionnaire.

In the end, I developed a three-question survey, focussing on what I still term my “killer evaluation questions”.

The first question:

“How much of the training will you be able to apply back on the job and give at least one reason for your answer?”

Using a percentage rating, together with a comments box, I wanted the learners to tell me how much of the content was relevant or useful to them, and why.  A high percentage score would mean we were developing learning that met their needs and stood a good chance of being implemented.  A low score would suggest we were off the mark, and their comments would guide us to the reasons.
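As a rough illustration of how those responses could be summarised, here is a minimal sketch in Python.  It assumes each response is simply a percentage plus a free-text comment, and the 50% threshold for pulling out comments is an arbitrary choice for the example rather than part of my actual process.

```python
# Minimal sketch: summarising responses to the "how much will you apply?" question.
# Each response is assumed to be a (percentage, comment) pair.

def summarise_application_scores(responses):
    """responses: list of (percentage 0-100, comment) tuples."""
    scores = [pct for pct, _ in responses]
    average = sum(scores) / len(scores) if scores else 0
    # Collect comments from low scorers so we can see why we were off the mark
    # (the 50% threshold is an assumption for this example).
    low_score_comments = [comment for pct, comment in responses if pct < 50]
    return {"average_percent": average, "low_score_comments": low_score_comments}

print(summarise_application_scores([
    (80, "Directly relevant to my day-to-day role"),
    (30, "Too generic for our team"),
]))
```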

For me, this questioning approach sought to estimate Kirkpatrick Level 3 (and possibly Level 4), giving the learner the chance to comment only on those Level 1 (and possibly Level 2) aspects that really mattered to them.

My second question was:

“How successful do you think you will be in applying what you’ve learned back on the job and give at least one reason for your answer?”

Here I used a Likert scale, together with a comments box, to allow the learner to consider how confident they would be applying what they had learned from the content that had been most useful to them.  Again, the comments would give an indication as to what we might expect to see in terms of a Level 3 improvement (and possibly Level 4).
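To show how little analysis this question needs, here is a small sketch of tallying the Likert responses.  The five scale labels are assumptions for illustration; the survey could use any set of points.

```python
# Minimal sketch: tallying Likert responses to the confidence question.
# The five labels are assumptions; any Likert scale points could be used.
from collections import Counter

LIKERT_LABELS = ["Very unconfident", "Unconfident", "Neutral",
                 "Confident", "Very confident"]

def confidence_breakdown(responses):
    """responses: list of Likert labels chosen by learners."""
    counts = Counter(responses)
    # Report every point on the scale, including those nobody chose.
    return {label: counts.get(label, 0) for label in LIKERT_LABELS}

print(confidence_breakdown(["Confident", "Very confident", "Confident", "Neutral"]))
```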

The final question in my “trinity” was the standard net promoter score question:

“Would you recommend this training to a colleague and what would we need to change for you to rate it a 10?”

Regardless of what a learner thinks about aspects of some training they’ve received, ultimately we want them to endorse it by recommending it to others.  For me – and for the world of marketing, sales and customer service – that is a key goal.  Due to the difficulties of determining a benchmark NPS score for training, we tended to just report on the different ranges, rather than trying to come up with a single figure that, if I’m honest, most stakeholders struggled to interpret.
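For anyone unfamiliar with the arithmetic, here is a hedged sketch of the standard NPS calculation alongside the simpler range-based breakdown described above (detractors score 0–6, passives 7–8, promoters 9–10).  The function name and output shape are just illustrative.

```python
# Minimal sketch: the standard NPS calculation plus the range-based breakdown
# (detractors 0-6, passives 7-8, promoters 9-10) that proved easier to report.

def nps_report(scores):
    """scores: list of 0-10 'would you recommend?' ratings."""
    total = len(scores)
    detractors = sum(1 for s in scores if s <= 6)
    passives = sum(1 for s in scores if 7 <= s <= 8)
    promoters = sum(1 for s in scores if s >= 9)
    # The single NPS figure that stakeholders often struggled to interpret.
    nps = round(100 * (promoters - detractors) / total) if total else 0
    return {"detractors": detractors, "passives": passives,
            "promoters": promoters, "nps": nps}

print(nps_report([10, 9, 8, 6, 10, 7, 3]))
```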

Again, the comments box provided the learner with their opportunity to highlight the things they would like to see changed.  With old-style questionnaires – where we ask about lots of different variables, from the quality of the trainer, through the usefulness of the materials, to the temperature of the training room and the lunch buffet – I sometimes feel we are leading the learner to comment on things that wouldn’t ordinarily have concerned them.  We then spend time digesting more feedback than we can handle, and possibly making changes that weren’t really necessary.

As to the follow-up surveys, I amended the wording slightly:

“How much of the training have you been able to apply back on the job and give at least one reason for your answer?”

“How successful have you been in applying what you’ve learned back on the job and give at least one reason for your answer?”

With the benefit of time (and hindsight), learners will tell us how much of the training actually turned out to be of value and, through their comments to the second question, hopefully provide us with some examples of performance improvements that we can share with the stakeholders.  Comparing the immediate reaction and follow-up responses might also highlight other areas for improvement to the training itself.
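If you wanted to make that comparison systematically, a minimal sketch might pair each learner’s immediate “will apply” percentage with their follow-up “have applied” percentage and look at the gap.  The learner identifiers and data layout here are assumptions for the example.

```python
# Minimal sketch: comparing immediate "will apply" percentages with follow-up
# "have applied" percentages for the same learners. Learner IDs are illustrative.

def compare_predicted_vs_actual(immediate, follow_up):
    """immediate, follow_up: dicts mapping learner id -> percentage (0-100)."""
    gaps = {}
    for learner, predicted in immediate.items():
        if learner in follow_up:
            gaps[learner] = follow_up[learner] - predicted  # negative = shortfall
    average_gap = sum(gaps.values()) / len(gaps) if gaps else 0
    return gaps, average_gap

gaps, average_gap = compare_predicted_vs_actual(
    {"learner_a": 80, "learner_b": 60},
    {"learner_a": 70, "learner_b": 65},
)
print(gaps, average_gap)
```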

I didn’t ask the NPS question a second time – one less question for the responder to worry about – but it might still be a good question to ask.

Hopefully these questions will encourage you to think about what you ask your learners.  For me, being able to quickly report on the predicted and then actual impact of the training has been a good approach.  And with higher completion rates, having more such qualitative data has increased the value of that feedback.

 
