If Learners Were Patients, How Would You Measure Our Success? By Visits?

Summary: What if the treatment generally works, just not on this patient? Success or failure?

True Story

Once I had some allergy-like symptoms, and my doctor sent me to see an ENT (ear, nose, and throat specialist). It was my first time seeing such a specialist. I was put in a half-lit room to sit in a chair. After a while, the ENT swung in. He checked my nose and throat with a camera, said something along the lines of "I don't see any problems," and left. I sat there in the dark room wondering if this was it or if he was coming back to ask me some questions... Fifteen minutes later, a nurse saw me sitting there and asked what I was doing in the chair. I explained that I had no idea what was next. Apparently, nothing. I was free to go. So, this ENT just swung in, determined from his perspective that I had no problems, and left. I knew I had problems; otherwise, I would not have gone to see an ENT... I don't blame him. He probably had many patients to attend to and no time to squander. He did what he was good at: checking my nose and throat.

Sometimes we, learning professionals, act like ENTs. We don't have time for patients because there's so much to do. We're called in at the last second to do what we're good at: learning design. We deliver, and then we leave. For us, the number of visits completed shows that we're busy delivering. But what if the patient dies? Is it our fault?

What's The Role Of A Learning Professional?

Are we responsible for the ear, nose, and throat at this moment, or for the whole person with a history and a future? How often are we wasting our audience's time? If we take the SME's content and turn it into a meaningful learning experience, are we doing a good job? Or should we also be responsible for what happens after the learning experience? What if you know that the content is great but irrelevant for part of the audience? Should we care enough to follow up? To influence?

This article explores these and other questions, starting with a common phrase people hear about promoted or mandated training: "You must take this training because it's mandated for everyone."

Have you ever taken a course or attended an ILT session, webinar, or any sort of "professional development" event that you thought was a waste of your time? Would you hold the learning designer responsible for the wasted time if they only cared about the quality of the learning material? If it's a live VILT session, would you hold the facilitators responsible for delivering irrelevant content?

"You Must Take This Course Because Everyone Needs To Take It"

Once I had to attend a seven-week-long certification program for online learning design and facilitation skills. It was supposed to be my professional development, according to our leadership. Not only did I learn nothing, but the event also ate up the budget for other things I would have appreciated. The certification started out with an hour-long introduction to the online platform. I had used that platform for years to deliver highly interactive webinars. When I asked why I needed to take this intro, I was told that "everyone needs to take it."

How To Kill Engagement And Motivation?

One of the well-known frameworks addressing engagement and motivation is self-determination theory.

Self-determination theory (SDT) represents a broad framework for the study of human motivation and personality. SDT articulates a meta-theory for framing motivational studies, a formal theory that defines intrinsic and varied extrinsic sources of motivation, and a description of the respective roles of intrinsic and types of extrinsic motivation in cognitive and social development and in individual differences [1].

There are three major components of SDT: autonomy, competence, and relatedness. When all three are present in a well-designed experience, engagement and motivation are more likely to occur. When one or more of them are missing, the opposite happens:

  1. Being given no choice but to attend the intro takes away your autonomy;
  2. Not being allowed to prove you already have the knowledge and skills diminishes your sense of competence; and
  3. With the two above gone, your relatedness suffers: the enthusiasm to feel like you belong, the drive to be part of something bigger than yourself, turns into passive-aggressiveness.

Faulty Logic: "You Need To Complete This Intro Because Everyone Needs To"

What's the problem with this logic? "You" are part of everyone, and since everyone needs to take it, you must take it as well. Sounds legit. However, it relies on the assumption that everyone does, in fact, need to take the course. So, the question is: why does everyone need to take this course? Because it's fundamental that, for the rest of the seven weeks, all the participants have the skills to use the online platform.

This is one of the deadly traps learning professionals often fall into with Subject Matter Experts and stakeholders. Do you see the trap?

Mixing Intention And Execution

Intention and execution are two different concepts. The intention is good: everyone should have the skills to manage the online platform. Therefore, they created this intro that "covers" all the fundamentals of the platform. That is the execution of the intention. If you question the execution, then in the designers' eyes, you're questioning the intention.

But there's a problem! Forcing people to complete an intro is not the only way to execute on the intention. In fact, one can argue that it is not an effective way to execute at all. What's sorely missing here is measurement and evaluation!

Measurement And Evaluation

How do you know that someone has the skills that matter? If they are so fundamental, you should have a way of measuring and evaluating them. Completing an intro where you're told to use a feature such as whiteboarding is not an effective way to measure and evaluate facilitation or learning skills. If these skills are so fundamental, why not offer an opt-out test where I can show you I already have them?
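
To make this concrete, here is a minimal sketch of what such an opt-out (test-out) check could look like. The skill names, threshold, and function are invented for illustration; nothing here comes from the actual certification in question.

```python
# Hypothetical test-out flow: learners who demonstrate the platform
# skills up front skip the intro instead of sitting through it.

REQUIRED_SKILLS = {"breakout_rooms", "whiteboarding", "polling"}  # assumed skill list
PASS_THRESHOLD = 0.8  # assumed mastery cutoff per skill

def must_take_intro(assessment_scores: dict) -> bool:
    """Return True only if the learner failed to demonstrate a required skill."""
    return any(
        assessment_scores.get(skill, 0.0) < PASS_THRESHOLD
        for skill in REQUIRED_SKILLS
    )

# Example: an experienced facilitator tests out; a newcomer does not.
veteran = {"breakout_rooms": 0.95, "whiteboarding": 0.9, "polling": 1.0}
newcomer = {"breakout_rooms": 0.4}

assert must_take_intro(veteran) is False
assert must_take_intro(newcomer) is True
```

The point of the sketch is the design choice, not the code: the gate is the demonstrated skill, not attendance.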

By the way, the certification ended with a final project we had to submit. That's a good approach: real application of knowledge and skills. However, I could have passed that final submission without taking any of the seven weeks of course content. Imagine how much I could have accomplished if I had actually learned something during those seven weeks.

Whose fault is it? The learning designers of the seven-week course? The facilitators who delivered it? The course was well designed for those completely new to online learning design and facilitation.

What would you do? Let's say you're in charge of designing this seven-week course. You even have an SME to provide content. If you knew that the course would be mandatory for every participant, no matter what their current skills are, would you speak up?

We often have the responsibility to design and deliver learning without the authority that should come with it. But I believe it is also our responsibility to raise important issues with our stakeholders, provide more effective alternatives, and let them make informed decisions. Our customers are not SMEs or stakeholders. Our customers are those who will go through this learning experience to grow their knowledge and skills so they can do their jobs better, faster, or easier. Their careers are at stake. We are responsible for human lives, not only for screens, drag-and-drops, and completions.

2 Factors You Should Look Out For

  1. If something is so important that your stakeholders believe employees must have the associated knowledge or skills, the first thing they will need help deciding is how to measure it and what minimum level employees need to achieve (some plans call these competency levels).
  2. When everyone gets trained on the same content, no one gets trained. Lack of relevance is one of the top barriers to effective learning impact. In fact, I would argue that irrelevance is one of the top causes of failure in workplace learning. Two major components of relevance can be the culprit (see the sketch after this list):
    • Timing
      Is the knowledge or skill relevant to the audience now? If not now, how long until it becomes relevant to their job? I've seen process and application training fail over and over again because it was scheduled ahead of time, way ahead of time.
    • Role and personalization
      Is the knowledge or skill relevant to each individual employee? If your training does not answer the following personalized question, it is not likely to make an impact: "What do you want me to do differently?"
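
Here is the sketch promised above: a minimal, hypothetical relevance screen that enrolls only the people for whom the training is both timely and role-relevant. The roles, field names, and the 90-day window are assumptions for illustration, not a prescription.

```python
# Hypothetical relevance screen: enroll only the people for whom the
# training is timely (needed soon) and role-relevant (changes their job).

from dataclasses import dataclass

@dataclass
class Employee:
    name: str
    role: str
    days_until_skill_needed: int  # assumed field: when the skill hits their job

RELEVANT_ROLES = {"facilitator", "designer"}  # assumed target roles
TIMING_WINDOW_DAYS = 90  # assumed: beyond this, learning decays before use

def is_relevant(e: Employee) -> bool:
    """Timing AND role/personalization must both hold, not just one."""
    timely = e.days_until_skill_needed <= TIMING_WINDOW_DAYS
    role_fit = e.role in RELEVANT_ROLES
    return timely and role_fit

staff = [
    Employee("Ana", "facilitator", 30),   # enroll: needs it next month
    Employee("Ben", "accountant", 15),    # skip: wrong role
    Employee("Cleo", "designer", 400),    # skip: too far ahead of need
]
print([e.name for e in staff if is_relevant(e)])  # ['Ana']
```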

Ignoring these two points may result in a check-mark training event with no impact on the job. Yes, it is logistically more convenient to force "everyone" to attend a webinar or take an eLearning course, but it can easily backfire.

What Is Measured (And Evaluated) Gets Done

The Learning Guild's report "Evaluating Learning: Insights from Learning Professionals" by Will Thalheimer [2] indicates that 95.7% of respondents do evaluation themselves, as a learning organization, without any external help. Does this high number mean we're so good that we don't need anyone else? Or that we're only measuring what we can within our reach?

The report's conclusion inclines toward the latter:

Our survey results showed that the most common method of “evaluating” learning is measuring attendance and completion rates. Unfortunately, learners can attend and complete learning but still not learn. For this reason, we really ought to stop reporting these numbers.

Are we happy with this effort? Should we be? According to the report, only 60% of us are satisfied with our measurement and evaluation efforts. Those who are satisfied have something in common:

Overall, it seems that people who are happiest are (1) doing some form of evaluation, (2) going beyond learner surveys, (3) getting data that is actionable, and/or (4) satisfying organizational stakeholders with results on impact, whether that is work performance or organizational results.

For the remaining 40%, we have work to do. But what if we don't have the tools, technology, time, support, etc.? Shouldn't we just focus on the content and do our best learning design instead?

Here are a few things I've learned in 20+ years about measurement and evaluation strategy:

  1. If you don't plan to measure and evaluate the outcome, you can still build nice PowerPoint presentations for your L&D team.
  2. You can turn a failed project into a "successful" one just by changing the measurement and evaluation criteria at the end.
  3. Measurement and evaluation work backward: you need to understand the business goals, performance goals, and KPIs (Key Performance Indicators) before you can create your strategy; otherwise, you can execute the best design and still make zero impact on the job (see the sketch after this list).
  4. If you only propose to measure what you know you can measure, you may just keep counting completions forever. Define what would be on your scorecard first, then collaborate with others in the organization to find out how it can be measured. Sometimes you will end up with creative proxies (items that represent concepts you can't directly measure, such as engagement), but at least you'll all speak the same language.
  5. And finally, this is really intriguing: simply having a measurement and evaluation strategy agreed upon with your stakeholders can clarify the desired outcome and define the scope, even if you can't execute the strategy. It forces everyone to be on the same page in terms of goals.
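
Here is the sketch referenced in point 3: a minimal, hypothetical measurement plan built backward from a business goal, with the scorecard (point 4) settled before any learning solution is chosen. The goal, KPIs, sources, and proxy metric are invented examples.

```python
# Hypothetical backward-designed measurement plan: start from the business
# goal and only then decide what the learning program must show on its scorecard.

measurement_plan = {
    "business_goal": "Cut average support-ticket handling time by 15%",  # assumed example
    "performance_goal": "Agents resolve common issues without escalation",
    "kpis": ["avg_handling_time", "escalation_rate"],
    "scorecard": [
        # direct measures first...
        {"metric": "escalation_rate", "source": "ticketing system", "proxy": False},
        # ...then creative proxies for what can't be measured directly
        {"metric": "peer_coaching_posts", "source": "community forum", "proxy": True},
    ],
    "learning_solution": None,  # deliberately decided last, not first
}

# Working backward: the design is only chosen once we know what success looks like.
for item in measurement_plan["scorecard"]:
    kind = "proxy" if item["proxy"] else "direct"
    print(f'{item["metric"]} ({kind}) <- {item["source"]}')
```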

Conclusion

We may not have the authority to change the world of learning and performance, but we certainly have the responsibility to raise issues and ask the right questions. You may not have all the tools, but you certainly can use your mind to shift thinking. You may not be able to do everything you want, but you certainly can do one thing that triggers change.

"Start where you are. Use what you have. Do what you can." – Arthur Ashe

This article raised a lot of questions. Write down three that resonate with you and discuss them with your team. If nothing else, start asking "why" five times in a row and start digging [3].
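
Purely as an illustration, here is what such a "why" chain might look like for the seven-week certification story above. The answers are my reconstruction of that story, not the output of a formal root cause analysis.

```python
# Hypothetical 5 Whys chain: each answer becomes the next question
# until a root cause (not a symptom) surfaces.

five_whys = [
    ("Why did the training feel like a waste of time?",
     "Because the intro covered skills I already had."),
    ("Why did I have to sit through content I already knew?",
     "Because everyone was required to take it."),
    ("Why was everyone required to take it?",
     "Because the platform skills are fundamental to the course."),
    ("Why couldn't I prove I already had those skills?",
     "Because there was no opt-out assessment."),
    ("Why was there no opt-out assessment?",
     "Because no one defined how the skills would be measured."),  # root cause
]

for why, answer in five_whys:
    print(f"{why}\n  -> {answer}")
```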

References:

[1] "Theory," selfdeterminationtheory.org

[2] Will Thalheimer, "Evaluating Learning: Insights from Learning Professionals," The Learning Guild

[3] "5 Whys: The Ultimate Root Cause Analysis Tool"