Scenario 1:
New hires have been trained for weeks on a new product being launched in the marketplace. They have been provided product-specific clinical data, promotional resources, competitive knowledge and a solid understanding of their customer segments. They have also been trained to manage customer issues, questions and objections, so that they can deliver a compelling and compliant sales pitch.
Scenario 2:
The marketing team outlines a new campaign with updated key messages and the latest clinical data. These are explained over a few days in a live plan-of-action (POA) meeting to the entire sales team.
How certain are you that the knowledge imparted will be retained by the salesforce and put to effective use over time?
How do you ascertain whether the training provided has been effective when your salesforce is all set to hit the market?
They are usually assessed using a single-score quiz of multiple-choice questions. However, this approach has some limitations:
- Scope for guesswork, which skews the results
- Assessments that are not engaging
- No enhancement of the learning experience
- Poor learning retention
- No measurement of the learner's confidence in their newly gained knowledge
- No reassessment of individuals at regular intervals
Worst of all, as research has shown, almost 90 percent of the knowledge is lost within three months of a learning event!
So, is there a better way to assess training effectiveness and ensure knowledge retention? Confidence-based assessments coupled with Spaced Learning can be an answer.
Confidence-Based Assessments & Spaced Learning
There is a body of research suggesting that spacing learning over time helps people learn and remember better. Learners are presented with a concept or learning objective and, after a period of time, are presented the same concept again. This might involve a few repetitions or many, depending on how complex the content is. A spaced approach can also serve as a follow-up to a one-off event to minimize forgetting after that event.
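As a rough illustration of the idea (the interval lengths below are assumptions, not a prescription from any study), a spaced-learning schedule can be generated by expanding the gap between reinforcement sessions after the initial learning event:

```python
from datetime import date, timedelta

def next_review_dates(start, intervals_days=(1, 3, 7, 14, 30)):
    """Return reinforcement dates at expanding intervals after a learning event.

    The default intervals (1, 3, 7, 14, 30 days) are illustrative only;
    in practice they would be tuned to content complexity and learner performance.
    """
    return [start + timedelta(days=d) for d in intervals_days]

# Example: a POA meeting held on 2024-01-08 would be reinforced
# the next day, then at progressively longer intervals.
sessions = next_review_dates(date(2024, 1, 8))
```

A real platform would adjust these intervals per learner, shortening them for concepts the learner keeps missing and lengthening them once mastery is shown.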
Confidence-based assessments measure both the correctness of a learner's knowledge and the learner's confidence in that knowledge. They are designed to increase retention and to identify and minimize guessing, which can skew the results of traditional single-score assessments. They distinguish between what individuals think they know and what they actually know.
Jon Rosewell, senior lecturer in information and communication technologies at The Open University, UK, states:
“A learner who has low certainty about their own knowledge (even if correct) is not able to act effectively on that knowledge, and a student who mistakenly reports high certainty risks making mistakes if they take action based on incorrect understanding.
Trials have shown that students do report their certainty in a way that reflects their underlying understanding, for example in large studies undertaken in a medical science context, particularly where confidence-based questions are presented as a formal part of overall course assessment, the confidence-based marks are tightly correlated to a simple measure of accuracy (the percentage of questions which are answered correctly). However, the confidence-based evaluation reveals that students are accurately assessing their own certainty on individual questions because their marks overall are higher than if they had set their certainty at random. The technique therefore gives the teacher a more accurate and realistic judgement of the student’s knowledge.
CBM has had niche success in the past in the context of medical training where assessment of competency and mastery is expected.”
A simple 2×2 matrix of knowledge versus confidence can be plotted (source: https://en.wikipedia.org/wiki/Confidence-based_learning). Called the “Learning Behavior Model,” its quadrants and the associated learner behaviors are as follows:
Misinformation
Knowledge a learner confidently believes to be correct, but which is actually incorrect. Those who have confidence in wrong information (misinformation) will very likely make mistakes on the job, which puts companies at the most risk.
Mastery
Knowledge a learner knows confidently that is correct, and which will likely be applied correctly in practice. Learners who have correct knowledge and a high degree of confidence in their knowledge (mastery) are masters of that knowledge domain. These learners are likely to act and act correctly, resulting in higher performing and more productive learners who make fewer mistakes.
Doubt
Knowledge a learner believes to be correct, but an element of doubt exists that may cause the learner not to act on that knowledge. Someone who harbors doubt may be correct on a certification, but is likely to act with hesitation or not act at all.
Uninformed
Knowledge that a learner has not acquired yet. Someone who is uninformed is unlikely to act, which can result in a state of paralysis.
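The four quadrants above follow directly from two signals per response: whether the answer was correct, and whether the learner reported high confidence. A minimal sketch of that mapping (the function name and boolean inputs are illustrative assumptions):

```python
def learning_quadrant(correct: bool, confident: bool) -> str:
    """Map a (correctness, confidence) pair to a quadrant of the
    Learning Behavior Model described above."""
    if correct and confident:
        return "Mastery"        # knows it, and knows that they know it
    if correct and not confident:
        return "Doubt"          # right answer, but may hesitate to act
    if not correct and confident:
        return "Misinformation" # confidently wrong: highest business risk
    return "Uninformed"         # knowledge not yet acquired
```

In practice, confidence would be captured on a scale rather than as a yes/no flag, with a threshold deciding which half of the matrix a response falls into.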
How Can One Develop Confidence-Based Assessments?
There are multiple ways that confidence can be measured while developing assessments:
- The time taken to respond after reading the question
- Placing a bet or wager on the chosen answer; the higher the wager, the stronger the learner's confidence in that answer
- Simply asking, “How confident are you in your answer?”
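To make the wagering idea concrete: in Gardner-Medwin's certainty-based marking scheme (referenced later in this article), a correct answer earns 1, 2 or 3 marks at confidence levels 1, 2 or 3, while a wrong answer costs 0, −2 or −6, so confident errors are penalized most heavily. A sketch of that scoring rule:

```python
# Certainty-based marking (CBM) payoff table, per Gardner-Medwin's scheme:
# confidence level -> (marks if correct, marks if wrong)
CBM_MARKS = {1: (1, 0), 2: (2, -2), 3: (3, -6)}

def cbm_score(correct: bool, confidence: int) -> int:
    """Score a single answer given its correctness and the learner's
    self-reported confidence level (1 = low, 3 = high)."""
    reward, penalty = CBM_MARKS[confidence]
    return reward if correct else penalty
```

The asymmetry of the penalties is the point: a learner who is unsure does best by honestly reporting low confidence rather than bluffing, which is what makes the confidence report informative.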
Irrespective of the approach, confidence-based assessments can offer a contextually-smart learning environment. The success of this approach lies in developing an individualized learning plan for the learner based on where they lie on the learning behavior model. The prescriptive plan includes feedback on the learner’s performance and the learning content needed to fill knowledge gaps. Since the process is based on an individual’s knowledge and confidence, the number of learning sessions needed to achieve mastery for a module will vary per learner.
Some companies are adding a level of gamification on top of the confidence-based assessments to enhance the learning experience as well as learner engagement. Hence the user experience is customized towards a game-based environment complete with points scored, badges earned, levels unlocked and a ‘live’ leaderboard to generate competition amongst peers.
In their paper titled “Certainty-Based Marking (CBM) for Reflective Learning and Proper Knowledge Assessment”, Tony Gardner-Medwin, emeritus professor, University College London and Nancy Curtin, emeritus professor, Imperial College London state:
“We certainly don’t advocate computer-marked tests, even with CBM, as an ideal or sole form of assessment. But in large classes, especially where there is critical core material as in medicine, there is no option but to use them as a substantial component of assessment, and particularly of self-assessment to support learning. We must use them in the best possible way.”
One criticism of confidence-based assessment is that it can introduce gender bias, on the belief that it disadvantages diffident or risk-averse personalities, supposedly more common among women. In practice, however, confidence-based assessment encourages learners to think more carefully, uncover points of weakness and distinguish between sound and weak conclusions. Learners come to realize that sound knowledge cannot be based on hunches, which in turn helps build their self-confidence.
In conclusion, a platform that brings together aspects of confidence-based assessment, spaced learning, and gamification can drive confidence, retention and ultimately learning.
Vishal Makhija is manager, client services for Indegene. Email Vishal at vishal.makhija@indegene.com.