Let's Get Real About Corporate Learning Evaluation

Summary: This article builds on perspectives outlined by Dr. Will Thalheimer of Work-Learning Research in a blog post titled, "Evaluating Learning for Your Business Stakeholders - Warning!!". It clarifies misconceptions about learning measurement and suggests an alternate path forward.

Corporate Learning Evaluation: How To Clarify Common Misconceptions

In his article, “Evaluating Learning for Your Business Stakeholders. Warning!!” Dr. Will Thalheimer calls out a dangerous dynamic that permeates the learning industry, both as it applies to evaluation ideology, and the field in general: The tendency to view everything through the lens of binaries. The article opens with the following anecdote:

At a recent industry conference, a speaker, offering their expertise on learning evaluation, said this: “As a discipline, we must look at the metrics that really matter… not to us but to the business we serve”.

Unfortunately, this is one of the most counterproductive memes in learning evaluation. It is counterproductive because it throws our profession under the bus. In this telling, we have no professional principles, no standards, no foundational ethics. We are servants, cleaning the floors the way we are instructed to clean them, even if we know a better way.

As Dr. Thalheimer correctly identifies, the speaker’s statement frames a false choice for learning practitioners, one that devalues functional expertise in order to meet the supposed expectations of stakeholders. This tendency toward self-immolation seems common in training and development, and ironically, rears its ugly head when practitioners bemoan their role as “order-takers” within their organization.

To address this unhelpful dynamic, learning professionals should value the skills and expertise they bring to the table, which include advocating for practices that drive results and reflect well-vetted, proven strategies. Note, adopting this approach in no way suggests one should discount the goals of the business stakeholder; they are, after all, the customer. Rather, learning professionals should build partnerships in which they help business partners accomplish desired goals in the most effective way possible.

As it applies to evaluation and measurement, this entails capturing metrics that both help create more effective learning deliverables and examine how a learning intervention impacts a financial result. The advent of xAPI has enhanced observation capabilities for the former, while most organizations continue to struggle to prove the latter, as it requires highly specialized capabilities in process and statistical analysis. Despite claims to the contrary by unscrupulous vendors, no magic AI solution or algorithm can assess training impact. It takes proper experiment design, isolation techniques, and the availability of the right metrics to draw reliable conclusions. Perhaps no “business measure” is more over-emphasized and abused than the correlation metric. “Hey look! Our training correlates with an increase in sales!” (Never mind that the organization concurrently launched a new ad campaign and implemented temporary promotional pricing on a highly desirable product.)
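To make the confounding problem concrete, consider a minimal, entirely hypothetical sketch (the store records, sales figures, and the simple "hold the promotion constant" comparison are all invented for illustration, not a substitute for proper experiment design): sales look much higher among trained stores, but once the concurrent promotion is held fixed, the apparent training effect shrinks dramatically.

```python
# Hypothetical store-level records: whether staff completed the training,
# whether the store ran the concurrent promotion, and weekly sales (in $K).
stores = [
    {"trained": True,  "promo": True,  "sales": 130},
    {"trained": True,  "promo": True,  "sales": 128},
    {"trained": True,  "promo": False, "sales": 104},
    {"trained": False, "promo": True,  "sales": 126},
    {"trained": False, "promo": False, "sales": 100},
    {"trained": False, "promo": False, "sales": 102},
]

def mean_sales(records):
    """Average sales across a list of store records."""
    return sum(r["sales"] for r in records) / len(records)

# Naive comparison: trained vs. untrained, ignoring the promotion entirely.
naive_lift = (mean_sales([r for r in stores if r["trained"]])
              - mean_sales([r for r in stores if not r["trained"]]))

# Isolated comparison: compare only stores that did NOT run the promotion,
# so the confounder is held constant.
isolated_lift = (mean_sales([r for r in stores if r["trained"] and not r["promo"]])
                 - mean_sales([r for r in stores if not r["trained"] and not r["promo"]]))

print(f"Naive lift: {naive_lift:.1f}")        # inflated by the promotion
print(f"Isolated lift: {isolated_lift:.1f}")  # far smaller with the confounder held fixed
```

In this toy data the naive comparison credits training with roughly four times the lift the isolated comparison does; real isolation work involves control groups, baselines, and statistical analysis, which is precisely why the capability is specialized.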

If evaluation requires such work, why bother? Is it even necessary? More than a few learning professionals balk at the suggestion that corporate learning teams are not charged with creating learning simply to build knowledge. “I know this training is great! Everyone knows something they didn’t before!” That’s not to say the pursuit of knowledge for knowledge’s sake can’t be a worthwhile endeavor. Colleges and universities all over the world foster curiosity and serve this exact need. However, building knowledge, or less concretely, “understanding,” does not resonate with leadership, business stakeholders, shareholders, or even the external customer. In a corporate environment, training that does not create some form of tangible value through its application to the business is an irresponsible use of the precious budget dollars for which learning professionals fight so desperately.

In answering the question, “What are our most important decisions as learning professionals?” Dr. Thalheimer echoes this statement when he identifies the following questions of interest:

  • Which after-training supports are helpful in enabling learning to be transferred and utilized by employees in their work?
  • Which supports should be kept?
  • Which need to be modified or discarded?

Notice that while Dr. Thalheimer classifies these goals as pertaining to the learning professional specifically, they have huge implications for organizational leadership as well: They reflect a commitment to be a good steward of resources by promoting what works, and modifying/abandoning what doesn’t. And in order to answer these questions, learning professionals must examine metrics about the deliverable itself as well as its impact, or lack thereof, on the business.

Does this mean reviewing every piece of data about a participant’s experience navigating an eLearning course, simulation, or instructor-led session? No, some metrics simply do not matter much; quality over quantity is key. “250 employees completed this training” doesn’t reveal much, but identifying that learners spent three times as long on slide eight’s scenario-based activity as on the slides that preceded or followed it offers potentially valuable insight about learner engagement:

  • Was the activity reflective of reality, and consequently more interesting?
  • Was the activity too difficult?
  • Was the design of the activity different in some way, making it challenging to navigate?
  • Did the media elements in the scenario run inefficiently, constantly freezing and trapping the learner on the slide?

Insights such as these support the creation of more effective solutions, and they can lead to better training development practices.

What about the coveted business impact metrics? Should learning professionals try to arrive at an ROI every time? No, the gap driving the training should justify this difficult and highly specialized process. Specifically:

  • Is the training supporting a core competency?
  • Is the audience for the training large, and thus connected to huge opportunity cost?
  • Are baselines to measure change available?
  • Do metrics related to the topic at hand even exist, and if so, where?
  • What’s the risk if the learners don’t improve?

These nuances and others should drive decision-making, and as Dr. Thalheimer points out, abdicating responsibility to share these insights with business partners only creates false hope and devalues the profession.

As long as learning professionals approach their trade with intellectual integrity and confidence, they can build relationships with the business partners they support. Moreover, business partners will likely develop a greater appreciation for L&D when they see the thought and expertise required to execute the job at the highest level. First, though, the learning industry must grow comfortable with its role within organizations and display the courage its craft deserves.