Tracking is not Learning

SCORM and the LMS are the Achilles' heel of training. Tracking data has become synonymous with measurement. This week I got an email from a vendor promoting the tracking capabilities of its product. It made me realize how often tracking data is used to misrepresent training success.

Many people think that metrics pulled out of an LMS indicate the success of training programs. They see tracking metrics as “performance measurement”. Tracking is not measurement and is no indication that learning took place, or that learning will transfer to job performance. Relying on tracking data shows our collective weakness in measuring training effectiveness.
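To see how thin that tracking data really is, consider what a SCORM 1.2 course actually reports to an LMS. The sketch below is a simplified illustration: the `API` object is a mock standing in for the real LMS-provided runtime, which stores whatever the course tells it. A status string and a score are essentially all the LMS ever sees; nothing in the data model says whether learning occurred.

```javascript
// Mock of the LMS-provided SCORM 1.2 runtime object. A real LMS exposes
// an object with these same method names; this stand-in just records
// whatever the course reports.
const API = {
  data: {},
  LMSInitialize() { return "true"; },
  LMSSetValue(key, value) { this.data[key] = value; return "true"; },
  LMSGetValue(key) { return this.data[key] ?? ""; },
  LMSCommit() { return "true"; },
  LMSFinish() { return "true"; },
};

// What a typical course reports when the learner reaches the last slide:
API.LMSInitialize("");
API.LMSSetValue("cmi.core.lesson_status", "completed");
API.LMSSetValue("cmi.core.score.raw", "90");
API.LMSCommit("");
API.LMSFinish("");

console.log(API.LMSGetValue("cmi.core.lesson_status")); // "completed"
```

Notice that "completed" can mean nothing more than "the learner clicked through every screen" — the same value a rigorous assessment would produce.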

Reliance on tracking data also means we aren’t asking the right questions about our training programs. How many times are you asked the following questions about training:

  • How many learners completed the eLearning course?
  • How many people attended training?

These are important stats, but they have little, if any, correlation to job performance. Just look at how effective the Secret Service ethics training was prior to the debacle in Colombia. Everyone completed training, but it was clearly not effective.

Reliance on tracking data also means instructional designers aren’t being honest about real measurement and effectiveness. It’s easy to hide behind tracking numbers, and often those numbers can give the impression that training programs are effective and valuable. It certainly sounds better to say that 90% of learners completed training than to say you have no idea whether or not they learned anything.

Yes, we can design training courses to measure learning through activities and assessments (not just quizzes, real assessment). In those cases, the tracking metrics do provide value. If the course has rigorous assessment, then your completion stat is an indicator of learning. But how many courses have you developed that truly assess learning? How many have you taken online or attended? Instructional designers are often between a rock and a hard place – there is an expectation from management (or customers) that people complete training, so we are under pressure to ensure they do. It’s our job as course developers to make sure that completion is not the most important metric.

And we won’t necessarily find the answer in Kirkpatrick’s Level 3 evaluation. That’s a great idea, but impractical in many organizations. If you can do Level 3 evaluations, then do them. We really need to look more seriously at the types of integrated assessments we do in training and how learners can measure their own success. We can’t be afraid to let people fail the assessment, and shouldn’t punish those that do.

But instructional designers and course developers need to start at the beginning and ensure that the people asking for reports on training success understand what the data means. We also have to ask the right questions before we start developing training:

  • Why is this training important to the organization?
  • What criteria will management use to determine success?

If the answer to the second question is something like “everyone will complete the training,” then you’d better go back to the first question and dig deeper into the problem.

In the long run, reliance on tracking data and lack of learning assessment will come back to bite us. If you figure out quickly that your training isn’t effective, then you can make adjustments before it’s too late. Achilles was a mighty warrior, but in the end he was defeated by his one vulnerability. Don’t let assessment and measurement be yours.

5 Responses to Tracking is not Learning

  1. Dan Knudson says:

    I couldn’t agree more, well said. It seems all too often that the only measurement tracked is overall use and/or attendance. I recently spoke at a conference and hammered home the fact that if you do not start with a measurable business goal, one that is derived from a need, your training will go nowhere. But if you identify the need and ask, “How do you know you need this? What data is telling you that this is where you are?” then you can use that same data after training to say, “This is where you are now.”

    • Dan, Great point about getting data up front that demonstrates the need for training. Essentially you can use the same measurement strategy to determine the effectiveness of your training.

  2. karen mahon says:

    Reblogged this on disrupt learning! and commented:
    This is a great post addressing measurement and instructional design….check it out and see what you think!

  3. Very helpful info! Every e-learning developer needs to abide by this.

  4. Larissa Zando says:

    Great blog, and I agree that there is a reliance on tracking data and that we need to focus on the needs and assess whether learning took place. Even when a designer follows Kirkpatrick’s Four Levels of Evaluation, the Level 1 reaction surveys and Level 2 assessments are often poorly designed and written. In my experience they are usually instructor-centered rather than learner-centered, and they measure recall, not application. Can you share any tips that will help us choose the appropriate activities and assessments to measure learning?

    I will keep the two questions in mind and ask them before I start designing and developing training courses. Thank you.

    Larissa
    Roosevelt University Student
    Blog: rutraining.org
