So, you’re investing time and resources in a top-notch training program for your organization, and now you’re ready to find out whether it works. When it comes to training metrics, measuring what people actually know can feel like a complex, elusive task. There’s a vast difference between knowing details for a moment (hello, test cramming) and genuinely being able to apply that knowledge in the future (the point of training). So, how can you make sure your training program creates valuable results?
The metrics you focus on should depend on the training’s learning objectives and business goals — and the resources you dedicate to the process. Who’s in the training, what they need to do with the information they learn, and how much time and money you dedicate to the program all may affect the training metrics you use to measure success.
To get good data, you have to customize your analysis. For instance, annual accounting update training for corporate leaders will have different benchmarks than a weeks-long onboarding program for new accountants. But with the right combination of tools — and thoroughness in the process — training metrics can give you the answers and insight you need.
Four Tips for Building Training Metrics That Matter
1. Check Knowledge Before and After a Course
Testing is one of the most common tools for measuring a training program’s success for a reason: You gain a clear picture of the specific knowledge attendees gain during a session. But to get good data, you have to ask the right questions and tie each of them back to your learning objectives. We believe that pass/fail testing is the best way to understand exactly what people learn in any given session; a pre-course test establishes a baseline, so the post-course score reflects what attendees gained rather than what they already knew. We also find post-course testing most effective when administered shortly after the training session: either immediately following an eLearning module or within a few days of a live training.
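To make the before-and-after comparison concrete, here is a minimal sketch of the arithmetic. The attendee names, scores, and the 80% passing threshold are illustrative assumptions, not figures from any particular testing platform.

```python
PASSING_SCORE = 80  # assumed pass/fail threshold, in percent

def summarize_results(pre_scores, post_scores):
    """Compare pre- and post-course scores for the same attendees.

    pre_scores / post_scores: dicts mapping attendee -> percent score.
    Returns (average gain, share of attendees passing the post-test).
    """
    attendees = pre_scores.keys() & post_scores.keys()
    if not attendees:
        return 0.0, 0.0
    gains = [post_scores[a] - pre_scores[a] for a in attendees]
    passed = sum(1 for a in attendees if post_scores[a] >= PASSING_SCORE)
    return sum(gains) / len(gains), passed / len(attendees)

# Hypothetical attendees and scores:
avg_gain, pass_rate = summarize_results(
    {"Ana": 55, "Ben": 70, "Caro": 60},
    {"Ana": 85, "Ben": 90, "Caro": 75},
)
```

Reporting the gain alongside the pass rate separates “how much people learned” from “who cleared the bar,” which are easy to conflate if you only look at post-test results.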
2. Choose the Right Bloom’s Taxonomy Levels
According to Bloom’s Taxonomy of cognitive learning, there are six distinct levels of understanding, ranging from Knowledge up to Evaluation. Before you design or test a course, determine how deeply attendees need to absorb each learning objective, and then tailor your teaching and testing to that goal. A single course may span multiple levels of the taxonomy: for example, attendees may need to reach the Analysis level for the new revenue recognition standard but only the Comprehension level for foreign currency transactions, because the company enters into very few FX transactions. When designing the test to determine how much attendees learn, make sure you tailor questions to the Bloom level you chose for each topic.
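One way to keep questions aligned with the level you chose per topic is to tag both and flag mismatches. This is a minimal sketch; the topic names mirror the examples above, and the question data structure is an assumption for illustration.

```python
# The six levels of the original Bloom's Taxonomy, shallowest to deepest.
BLOOM_LEVELS = ["Knowledge", "Comprehension", "Application",
                "Analysis", "Synthesis", "Evaluation"]

# Target depth chosen per learning objective (illustrative).
target_levels = {
    "revenue recognition": "Analysis",
    "foreign currency transactions": "Comprehension",
}

def mismatched_questions(questions, target_levels):
    """Flag questions written above or below their topic's target level."""
    flagged = []
    for q in questions:
        target = target_levels.get(q["topic"])
        if target is not None and q["level"] != target:
            flagged.append(q)
    return flagged

# Hypothetical question bank: the second question tests deeper than needed.
questions = [
    {"topic": "revenue recognition", "level": "Analysis"},
    {"topic": "foreign currency transactions", "level": "Evaluation"},
]
flagged = mismatched_questions(questions, target_levels)
```

A review pass like this catches both over-testing (asking Evaluation-level questions on a Comprehension-level topic) and under-testing before the exam ever reaches attendees.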
3. Ask for Feedback Immediately
Attendee feedback can provide powerful training metrics, but the longer you wait to gather answers, the less insight you’ll receive. Whenever possible, distribute and collect evaluations before learners leave the classroom. By doing so, you’ll gather people’s perspectives while the session is still fresh in their minds, and you’ll get feedback from everyone, not just the extremely pleased or displeased. You can always collect a second round of more in-depth feedback later on, but these initial evaluations will provide a big-picture view of what’s working and what needs work.
4. Apply the Knowledge Quickly and Repeatedly
If you want your training program to drive change, this step is the most important. Attendees may learn the right info, test well and provide great feedback, but if they don’t have opportunities to apply the knowledge, it’ll quickly fade away. Think of training like learning a foreign language: Regular, ongoing practice is the key to lasting skills. For learning that truly sticks, we recommend that you:
- Have attendees apply knowledge within two weeks,
- Reinforce the information through additional education within a few months and
- Revisit the topic again a year later.
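The reinforcement timeline above can be sketched as a small scheduling helper. The exact intervals (14, 90, and 365 days) are assumptions standing in for “two weeks,” “a few months,” and “a year later.”

```python
from datetime import date, timedelta

# Assumed intervals for the three reinforcement steps described above.
REINFORCEMENT_INTERVALS = {
    "apply the knowledge": timedelta(days=14),
    "reinforce with additional education": timedelta(days=90),
    "revisit the topic": timedelta(days=365),
}

def reinforcement_schedule(training_date):
    """Map each reinforcement step to its due date after the training."""
    return {step: training_date + delta
            for step, delta in REINFORCEMENT_INTERVALS.items()}

# Hypothetical training date:
schedule = reinforcement_schedule(date(2024, 1, 15))
```

Generating concrete due dates per cohort makes the follow-up steps schedulable in a calendar or LMS instead of leaving “revisit in a year” as a good intention.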
Ultimately, no quick calculation or single test can determine your training program’s value. But by combining quantitative and qualitative training metrics, you can build a robust view of what’s working and what needs work, so you can keep improving continually, just like the knowledge growth you expect from your employees.