Estimated reading time: 3 minutes
The goal of a training session is for employees to learn the content, so it’s important to have a mechanism in place that confirms learning actually took place. If the organization is using Kirkpatrick’s Levels of Evaluation, this is referred to as level two evaluation (learning). Here’s a quick reminder of the Kirkpatrick Model’s four levels:
- REACTION: In the reaction level, participants are asked for their “reaction” to training. Often this is done with a paper or online course evaluation. The training team can review reactions and make updates to training based on employee feedback.
- LEARNING: In this level, participants are often given a pre- and post-training test. Ideally, post-training scores should be higher, indicating that “learning” took place. But the training team can also review scores to spot trends – like every participant getting the same question wrong, or aggregate scores not being as high as expected.
- BEHAVIOR: The behavior level of evaluation confirms that participants are applying what they learned in training. For example, let’s say a company has a decision-making class where everyone learns how to create a force field analysis. After the training, senior management sees employees using force fields. That’s a sign that training was impactful.
- RESULTS: In the fourth level of the Kirkpatrick model, the organization sees a measurable outcome as a “result” of training. For example, if the organization is seeing errors in a particular assembly, it might decide to conduct training. Ideally, after training, the errors are reduced.
Today, I want to focus on the level two evaluation – learning. Specifically, I want to look at the kinds of testing options available. Here are four:
- Pencil / Paper: Let’s talk about this one first because it’s probably the thing that initially comes to mind when we say “test”. And there’s nothing wrong with having participants take a short quiz at the beginning and again at the end of training. The test can be multiple choice, true / false, or fill in the blank.
- Participation: Depending on the topic, training instructors might be asked to provide feedback about whether a participant meets the criteria for completing the training. The “test” might be an essay or short answer where the instructor evaluates the participant’s response for accuracy.
- Verbal: Participants are asked to talk through the steps to complete a task, process, or procedure. The training instructor or subject matter experts might ask follow-up questions to “test” the participant’s knowledge on the subject.
- Skills: Participants are asked to physically complete a task. This allows the training instructor to see that the participant can perform the skill they just learned. Often, participants are also asked to explain what they’re doing at the same time to show they understand the steps.
Each of these four testing options has advantages and challenges in terms of design, cost, time, and performance measurement. The last thing training designers want is a test that takes too long, costs too much to administer, or that no one can successfully pass. I’m not implying that tests need to be super easy either. They should be at the level necessary to show mastery of the subject.
That’s really what this is about. When employees go to training, organizations want to know that employees learned what they were supposed to learn. So, how does that happen? With a test. The good news is that there’s more than one way to test and confirm learning took place.
Image captured by Sharlyn Lauby while exploring the streets of Washington, DC