Amazon…TripAdvisor…ao.com: all sites that use the now familiar five-star rating system for recording satisfaction.
This approach to recording how much we like something has well and truly entered the learning space too and the vast majority of learning management systems (LMSs) now offer learners the chance to rate content this way. It’s quick – no one really has the time to complete a formal training evaluation survey – and as a measure of the learner’s gut-feel, it doesn’t require much reflection and mental processing. It’s also easy to analyse and very simple to present in reports and provides the stakeholders with an instant barometer of a training programme’s success.
So looked at pragmatically – with the never-ending focus on evaluation – using a five-star rating makes a lot of sense. Or does it?
What exactly are you rating?
Let’s look at some scenarios.
| What’s being rated? | What is the rating based on? |
|---|---|
| A fiction book on Amazon | |
| A non-fiction book on Amazon | |
| A gadget on Amazon | |
| The latest blockbuster film | |
| A West End musical show | |
| A training course | |
…and maybe…OK I’m pushing it…
So in the case of a training course, it’s likely that there will be many factors that influence the final rating. I might subconsciously award one star for each of the five factors that are important to me. But as the L&D professional reviewing the scores, I have no real idea what the contributing factors were. I’m sure that my stakeholders will be only too delighted to see lots of “fours” and “fives”, and you could argue that “fours” and “fives” should mean we just about got it right for most people. Were we to see lots of “twos” and “threes” (or the dreaded “one”), however, then the conversation will turn to “why?” and we won’t have the answers…unless we’ve asked for and been provided with supporting comments.
How often have you read the comments posted by a reviewer, only to then question why they rated the item a 5/5 when they highlighted half a dozen issues?
Or what about when the reviewer begins his comment by saying that he would actually give the item a 3.5 if he’d been allowed to use half-ratings?
Clearly, ratings coupled with comments are better than ratings alone; and asking for a rating in addition to a comment helps you assess how much weight the written feedback carried in the reviewer’s overall impression of the item.
But asking people to record comments isn’t easy. How many of us are happy to provide Amazon with a one-click star review, but shy away from entering any supporting commentary? And if we’re being forced to enter a comment, many of us choose not to respond at all.
There doesn’t seem to be a simple solution here. One idea might be to ask learners to rate, say, four or five different factors with an individual star-rating each. This probably requires a slightly lower mental load than working with a Likert scale, but it still adds some complexity to the process.
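To make the multi-factor idea concrete, here is a minimal sketch of what that data might look like. The factor names and values are purely illustrative assumptions, not taken from any real survey; the point is that keeping the per-factor breakdown lets L&D see *why* the overall figure landed where it did, rather than receiving a single opaque star count.

```python
# Hypothetical per-factor star ratings (1-5) from one learner.
# The factor names below are illustrative assumptions only.
ratings = {
    "content relevance": 4,
    "trainer": 5,
    "venue": 2,
    "pace": 3,
    "materials": 4,
}

# A simple overall score is the mean of the factor ratings,
# but the breakdown (e.g. the "2" for venue) is what answers "why?".
overall = sum(ratings.values()) / len(ratings)
print(f"Overall: {overall:.1f} / 5")  # → Overall: 3.6 / 5
```

An LMS report could then average each factor across all learners, so a low overall score points straight at the factor dragging it down.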
We shouldn’t reject the notion of ratings outright, though. Learners find ratings useful: if you include them in course descriptions in your LMS, the best content can filter to the top of search results, and a high rating provides some reassurance that others rated the course highly, for whatever reason was in their heads at the time.
So if ratings can raise more questions than they answer, how about recommendations?
Social media platforms regularly use the “thumbs up” and “thumbs down” as a quick way for the reader to “rate” a posting, be that a piece of text or a picture. Again, this is a quick way of providing feedback, but it still doesn’t answer the question “why?”. And it’s not always easy for the reader to decide whether a “thumbs up” or “thumbs down” is entirely appropriate. For instance – to cite an example I heard recently – when an announcement was made about the death of a colleague, people were unsure whether a “thumbs up” would represent a resounding tribute to the impact that person had had on their working lives, or…well… (best not said).
And how comfortable will learners feel acting like a Roman Emperor, determining the fate of their training department by the rotation of their thumb? At least a five-star rating allows some degree of freedom. If you’re responding to the discussion forum posting of a colleague, for example, would you feel comfortable giving it a “thumbs down”, if you might meet them later by the coffee machine? YouTube and public groups on Facebook are one thing, but an in-company learning experience is something different; and it’s generally not current practice to ask for supporting comments when using recommendations.
That said, if you use the “thumb up” to represent a “thanks” and not a “like”, then that’s an interesting and maybe more valuable piece of feedback as we want our training or discussion forum posting to have a positive impact on the learner or reader. But does that then mean a “thumbs down” meant it wasn’t useful?
If you do want to use recommendations, many of us have turned to the so-called net promoter score (NPS) measure that’s frequently used in marketing and customer-service settings, i.e. on a scale of 0 to 10, how likely would you be to recommend this to a friend? Mind you, the official method of calculating and interpreting the end result can be tricky – and is best suited to its original intended purpose – so many L&D teams use a different (and therefore strictly incorrect) method of analysis.
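For reference, the official NPS calculation works like this: responses of 9–10 are “promoters”, 0–6 are “detractors”, and 7–8 are “passives”; the score is the percentage of promoters minus the percentage of detractors, giving a figure from −100 to +100. A minimal sketch (the response data is made up for illustration):

```python
def net_promoter_score(scores):
    """Compute NPS from 0-10 'would you recommend?' responses.

    Promoters score 9-10, detractors 0-6; passives (7-8) count
    towards the total but neither add nor subtract.
    Returns a value from -100 to +100.
    """
    if not scores:
        raise ValueError("no responses to score")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Illustrative data: 40 promoters, 40 passives, 20 detractors.
responses = [10] * 40 + [8] * 40 + [3] * 20
print(net_promoter_score(responses))  # → 20
```

Note that simply averaging the 0–10 responses (a common L&D shortcut) would give a quite different picture from the official promoters-minus-detractors figure, which is the “strictly incorrect” analysis mentioned above.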
And what about the traditional evaluation form?
One of the reasons we’ve moved to using star ratings and recommendations is to counter the typically poor response rates to the traditional evaluation form. Learners have either been put off by the large number of questions we ask them to complete or have struggled to even find the evaluation form in the first place, if it’s been buried deep in the workings of the LMS.
They may not even feel they can yet answer some of the questions as they might need some time to reflect on what they’ve learned. In my posting “If Your LMS Was Like Amazon”, I mentioned that I liked how Amazon at least gave you some time to read the book or use the gadget, before it asked for your thoughts.
But the evaluation form still remains one of the best sources of feedback and allows us to ask those questions that a star rating system can’t reflect. It would seem that we need the best of both worlds: as many people as possible giving us their gut-feel rating, with a significant number of these also going on to complete the evaluation form. We can then report on the overall perception of the training – easy to do in reports and “learning dashboards” – while giving those involved in the design and development of the training richer qualitative data that they can translate into a commentary for the key stakeholders.
Ratings and recommendations are not going away fast, unless the likes of Facebook and Twitter come up with something different, in which case…watch this space!