
Evaluating a Virtual Instructional Program

Summary: As it turns out, evaluating a virtual training program is just like evaluating any other instructional program. The challenges lie in what you measure and how you interpret the results. The good news is that current hybrid virtual classroom platforms give you a variety of tools you can use to assess the effectiveness of your instructional delivery and its effect on learners.

Anyone familiar with ADDIE can tell you that the last step (the “E”) is evaluation. The challenge is that people often misinterpret what that step means: the goal is not to evaluate the learner, but rather the learning program. Effective program evaluation requires careful planning, prior to design, to identify key data measures and the means to collect that data before implementation.

Ever since affective feedback instrumentation became the norm (often termed “level one evaluations” or, more colloquially, “smile sheets”), trainers and training program managers have struggled with what to do with the data they collect. This has led to some rather creative ways of manipulating and displaying Likert scale data to program stakeholders in an effort to demonstrate the quality of the program. The problem is not with Likert scale data, but with what the people analyzing that data are attempting to show using it.
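To make the pitfall concrete, here is a minimal, hypothetical sketch in Python (the survey items and scores are invented for illustration). Because Likert responses are ordinal, reporting the median and the full distribution for each item is generally more defensible than handing stakeholders a single averaged score:

```python
from collections import Counter
from statistics import median

# Hypothetical level-one ("smile sheet") responses on a 1-5 Likert scale.
responses = {
    "The pacing was appropriate": [5, 4, 4, 3, 5, 2, 4],
    "The platform was easy to use": [5, 5, 4, 1, 5, 5, 2],
}

for item, scores in responses.items():
    dist = Counter(scores)
    # Ordinal data: report the median and the full distribution rather
    # than a mean, which can hide polarized responses.
    summary = ", ".join(f"{k}: {dist.get(k, 0)}" for k in range(1, 6))
    print(f"{item} | median={median(scores)} | {summary}")
```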

At the end of the day, an affective instrument measures how the learner feels about an instructional experience and its various aspects. This may sound harsh, but you can likely think of a dozen highly regulated training contexts where program sponsors have little interest in whether the learner has a positive learning experience, but extraordinary interest in whether behaviors or organizational performance change. Using affective data to try to answer that question in the absence of any other information is baseless at best and, at worst, can lead to improper decision making. The key is to identify, before you implement, what data exists that can address the specific desired outcomes of a program - that brings us to program evaluation and data identification.

For instructional program managers, the program evaluation framework perhaps best aligned to this need is the context, input, process, and product (CIPP) model (Stufflebeam, 1968), coupled with stakeholder inputs, which seeks to provide decision-makers with meaningful, timely, and reliable information about the efficacy of a training program. The point of this blog is not to preach ignoring feedback from learners - on the contrary, learners provide valuable insight into the quality and efficacy of training - but you have to ask the right questions and measure the right things. In the virtual classroom, measures of effect are often available automatically (attendance, participation rates, login/logout times, technology issues, accommodation needs and uses, etc.), but it does no one any good to simply collect that data and hand it to a decision-maker (“look what we did!”). The designer/developer needs to determine a priori how the data will be collected, analyzed, and reported, and (this is the important part) how that data relates to the desired outcomes of the instructional program. It is also best practice to brief the decision-maker on this data plan before delivery and get buy-in, so that everyone understands what the data signifies and its limitations; a sketch of such a plan follows.
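One way to picture such an a priori data plan is as a simple table that ties each automatically collected measure to the outcome it is claimed to inform and to the limitation you brief up front. This is a hypothetical Python sketch; the field names and measures are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Measure:
    """One entry in an a priori evaluation data plan (illustrative fields)."""
    name: str        # e.g., "attendance", "poll participation rate"
    source: str      # where the platform or LMS reports it
    analysis: str    # how it will be summarized
    outcome: str     # the program outcome it is claimed to inform
    limitation: str  # caveat to brief to the decision-maker up front

data_plan = [
    Measure(
        name="login/logout timestamps",
        source="virtual classroom platform export",
        analysis="minutes attended per session, per learner",
        outcome="exposure to required content",
        limitation="being logged in does not demonstrate attention",
    ),
    Measure(
        name="poll participation rate",
        source="session polling report",
        analysis="percent of polls answered per learner",
        outcome="engagement during delivery",
        limitation="does not measure correctness or transfer",
    ),
]

for m in data_plan:
    print(f"{m.name} -> {m.outcome} (caveat: {m.limitation})")
```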

Oftentimes, to determine the efficacy of a training program, we may have to look outside the program at data associated with the desired outcomes (error rates or productivity measures, for example), but training assets can also provide great insight (microlearning assets, as an example, can report how often they are referenced, giving insight into utilization and process compliance). The common wisdom here is that you don’t need to establish a causal relationship between training and the behavior data you’re measuring - it is enough to see whether the needle moves pre- and post-intervention. Coupling that data with meaningful information from the instructional delivery can paint a much richer and more accurate picture of the learner and the learner’s trajectory after training.
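As a simple illustration of watching the needle (a hypothetical sketch with invented numbers), a pre/post comparison of an outcome measure such as error rates can be as plain as:

```python
from statistics import mean

# Hypothetical weekly error rates (errors per 100 transactions)
# for the same team, before and after the training intervention.
pre_intervention = [4.2, 3.9, 4.5, 4.1, 4.4]
post_intervention = [3.1, 2.8, 3.4, 3.0]

delta = mean(post_intervention) - mean(pre_intervention)
print(f"pre mean:  {mean(pre_intervention):.2f}")
print(f"post mean: {mean(post_intervention):.2f}")
print(f"change:    {delta:+.2f}")
# Note: a pre/post shift shows the needle moving; it does not by itself
# establish that training caused the change.
```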

Are you providing a virtually excellent experience for your training team and learners?

Let our team of experts assess the design and delivery of your content and create a custom training solution for your team. This offering combines targeted consulting, personalized coaching, and customized training to create appropriate and effective training solutions. Our virtual classroom coaching experts focus on maximizing your team’s skills and the design of your existing program to take full advantage of the virtual learning environment.

