
[Podcast] Evaluating Hybrid and Virtual Learning

InSync Training Podcast - Modern Learning on the Air

Evaluating Virtual & Hybrid Learning can seem complicated. But it's possible!

When it comes to evaluating the effectiveness of hybrid and virtual learning, things can get pretty complicated. Unlike with traditional instruction, determining whether learners are actually learning in a hybrid or virtual setting can be tough. There are so many variables to consider! In this podcast, Dr. Charles Dye and Karen Vieth explore some of the complexities involved in measuring hybrid and virtual learning success.

Don't let the word "EVALUATION" scare you off! It's a really interesting episode. Click below to listen, or read the transcript. 

 

Transcript:  

Welcome to Modern Learning on the Air, the InSync Training Podcast. In conversation with some of the top leaders and thinkers in the modern virtual learning space, we will learn about the latest virtual classroom techniques, creative training initiatives, and virtual training best practices that will engage and empower your teams, colleagues, and learners. Enjoy the show.

KAREN VIETH : Hello everyone. My name is Karen Vieth, Vice President of Virtual Learning Services here at InSync Training. Welcome to Modern Learning on the Air. Today, I will be talking with Dr. Charles Dye, Head Researcher and evaluation expert here at InSync Training. Our topic is part of the "It's Complicated" series, where we've previously discussed hybrid learning from an instructional design perspective. Today, we turn our attention to evaluating hybrid and virtual learning. Hello, Charles.

CHARLES DYE : Hi Karen.

KAREN VIETH : Thanks so much for joining us today. I'd like to just take a moment for you to give us a quick background and let us know how you came to be interested in evaluating hybrid and virtual learning.

CHARLES DYE : Well, this goes all the way back to when I was in uniform, working at the submarine school in Groton, Connecticut. We had developed a set of distance learning initiatives that included what we called synchronous learning back then, but it was virtual live delivery, and it was to a hybrid audience. Some of the learners could have been in Guam, some of the learners could have been in Pearl Harbor, and others could have been in the European theater, and as a consequence, we had to figure out how to adapt our methodologies to look at whether the training was effective or not. And so, from there I went into my research efforts and looked very hard at program evaluation as it applies to a training program. I want to give everyone listening to this a kind of heads up: I will try not to rant too much, but we in the training industry collectively are guilty of kind of hand-waving at program evaluation, when there is serious methodology behind program evaluation that needs to be applied, particularly when we are talking about virtual and hybrid. So, that's how I got started, Karen. And really, what I'm looking at now is, what does the addition of a hybrid workforce, hybrid delivery, and virtual training bring to program evaluation in training?

KAREN VIETH : Well, and what better person for us to have this conversation with than you, because of your extensive background. I mean, I didn't even realize that it dates back as far as it does, to your distance learning efforts in the Navy, and the idea that that was even a hybrid learning environment. It kind of blows my mind, right, because it's such a hot topic right now, but really, we've been doing this for a very long time. So, when you think about data, what types of data would you suggest our listeners collect as they begin evaluating this type of hybrid and virtual learning?

CHARLES DYE : That's a great question, and if everyone listening comes away from this with one message, it's my heartfelt request that the only time, and the only effort, you make at data collection is to collect data that you have a use for. And Karen, you talk about how far back hybrid goes, but in fact, what most people listening think of when they hear about training and program evaluation is the Kirkpatrick model. And Kirkpatrick did some, at the time, groundbreaking research in trying to formalize an approach to methodology, and he created a 4-level, now 5-level, kind of hierarchy and taxonomy of an approach. But the challenge there is, one, it's a behavioral model approach, which means, if you are in charge of training in your organization and you have an organizational outcome you want to see, it gets very difficult to apply Kirkpatrick all the way up to the 4th level of organizational outcome for a training intervention. As you go up Kirkpatrick's levels, they get incredibly complex. I'm not bashing the work, it was very important to do, but the challenge is that most people have adopted it because it's really easy to get level 1 data out there, which is how people react to the training. And, if we think about adult learning principles and the research progeny there, people are pretty good at deciding if they are reacting to something a certain way; I liked the training, I didn't like the training, I enjoyed the training, I found it useful. But how effective, and how much of an expert, is the learner in evaluating its relevancy, really? And how much do you as a program provider really care if they liked it or not? What's far more important is the transfer, and the outcomes from that transfer on the organization, because, quite frankly, the only reason you are having training at all is to affect an organizational outcome. You're not giving training just because you want to. You either have a compliance requirement or an organizational change requirement or one of a host of different things that might cause that to come about. And so, hybrid has been around a long time, but Kirkpatrick has been around even longer. He wrote his first kind of categorical approach to this in 1969. So, that is more than 50 years ago. And program evaluation has come a long way since then. And so, what I would suggest when looking at hybrid is what is generally called a systems approach, which looks holistically at the training intervention and what you expect to see throughout the organization. It forces the training program admins that are listening, and the designers that are listening, to think not just about the learner and what we're trying to transfer to him or her, but also about what that transfer causes in the organization. And when we talk about hybrid and virtual, we bring in another dimension that quite frankly Kirkpatrick really can't accommodate too well, and I'll explain why in a minute, but we add this variability in the learning environment. Some people will be together, some people won't. Some people will be face to face in person, and others won't. And so, we need to be able to factor that in. If someone, for example, is predisposed not to enjoy being on Zoom, then that's going to negatively impact their ability to engage with the content. If we think about the InQuire model, they'll come in negatively emotionally engaged.
So, a systematic approach to program evaluation allows you to factor that out and look at the outcomes, and the organizational consequences of those outcomes, all as part of your evaluation protocol.

The other thing I'd suggest is that you think very carefully about how you want to collect the data. My general approach when I'm asked to do this at the enterprise level is the mantra: data is hard. Manual data is particularly hard. So, if you can, automate data collection. For example, we have one client at InSync that provides training on a set of software tools, and they know every learner in their client population who attends the training, and they can look at pre- and post-integration behaviors and how that is affecting their use of the software in support of their organization. So, then they turn to their particular clients and say, look, 45 people have been to the training, and they have done much better in accomplishing a certain set of tasks, and that enhances compliance or what have you, which is the organizational goal. So, when you're collecting data, automate it, and also make sure you collect data around the learning environment along with the treatment.
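To make the automation idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical (the learner records, the metric, and the numbers are invented, not the client data described in the episode); it simply shows the shape of the approach: join attendance and delivery environment, exported by the platform, to behavior metrics captured by the software itself, rather than relying on hand-entered survey data.

```python
# A minimal, purely hypothetical sketch of automated pre/post data collection.
# The field names, scores, and learner records below are invented for
# illustration; they are not the client data described in the episode.

from statistics import mean

# Attendance records exported automatically from the training platform,
# including the delivery environment each learner experienced.
attendance = [
    {"learner_id": 1, "attended": True,  "environment": "virtual"},
    {"learner_id": 2, "attended": True,  "environment": "in_person"},
    {"learner_id": 3, "attended": False, "environment": None},
]

# Task-completion rates captured by the software itself before and after
# the training window (the "pre and post" behaviors Charles describes).
task_metrics = {
    1: {"pre": 0.52, "post": 0.81},
    2: {"pre": 0.48, "post": 0.77},
    3: {"pre": 0.50, "post": 0.53},
}

def average_change(learner_ids):
    """Average post-minus-pre change in task completion for a group of learners."""
    changes = [task_metrics[i]["post"] - task_metrics[i]["pre"] for i in learner_ids]
    return mean(changes) if changes else 0.0

trained = [a["learner_id"] for a in attendance if a["attended"]]
untrained = [a["learner_id"] for a in attendance if not a["attended"]]

print(f"Trained learners:   average change {average_change(trained):+.2f}")
print(f"Untrained learners: average change {average_change(untrained):+.2f}")
```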

The other thing I would suggest is, if you are going to stick with a behavioral kind of approach under Kirkpatrick, look very carefully at the questions you ask in your reaction instrumentation at the end of the delivery. A lot of people call those smile sheets or level 1 surveys or what have you. The reason being, again, do you really care if they liked it? Much better questions revolve around adult learning principles: relevancy, utility, applicability to the job, and the like. And the other thing to think about with respect to hybrid and virtual is, we talk about one of the strengths of those tools, Karen, as being enhanced outreach. I can get training out a lot faster to a lot more people. Well, in Kirkpatrick's model, you couldn't even get to that kind of discussion until you are at a level 3 or 4 evaluation, and very few organizations ever do level 3 or 4 evaluations. They are expensive and time consuming, frankly. If we use a systematic approach, we automatically accommodate that by saying, instead of 5% of the learner population that needed to see this, 45% have seen it, or whatever the metrics are that you collect, so you can evaluate the impact on the organization instantaneously, based on nothing but attendance. So, it's important to understand and anticipate that data before you deliver. Does that answer your question? That was long-winded, I know.

KAREN VIETH : Wow, but no, I mean, you answered a lot of the questions that I had actually planned to ask, because a lot of organizations do use Kirkpatrick's levels of evaluation, and, you know, when you said level 1 smile sheets and asked whether we really care, I think I've always said, I don't really care if my audience liked the training or not; it's whether it was impactful. And so, thinking about the systematic approach that you just mentioned, as you were talking about it, I was kind of cheerleading behind the scenes: how are we collecting the data? How are we automating that data? How are we connecting that data to the learner? I wrote it down and called it the ripple effect of learner-to-learner transfer, right? That's really what it boils down to. We care whether we are making an impact on learning, and how learners are transferring that learning back on the job. That's really what we care about. And I love it also that you brought in the environment as well as the treatment. Because as we're moving into the hybrid environment, nationwide, worldwide, globally, whatever you want to say, and we're kind of going back to hybrid, again, however you want to say it, we have to start thinking about that environment as its own environment and the treatment as its own treatment. So, I just love that.

So, as organizations are looking for this evaluation approach, and maybe they've used Kirkpatrick's levels 1 through 5, and, as you stated and I've witnessed, many, many don't get past level 2, what would you suggest that organizations use for an evaluation approach that will help measure the effects and outcomes of a hybrid and virtual training initiative?

CHARLES DYE : That's a great question. I told you I was going to revisit Kirkpatrick and kind of talk about that. So, turning to the late 60s into the 70s, Kirkpatrick is doing a lot of his work. The next big jump in program evaluation, not even directly associated with training, but program evaluation generally as a methodology, the big set of research changes, came about in the late 80s through the mid-90s, and of course, we've gotten even better since then, but the kind of groundwork was laid out then. And I, personally, am very much what's called a CIPP model guy. CIPP, or Context, Input, Process, and Product; in our case it's not a product, it's an outcome of a training program. You essentially build a logic diagram to say, okay, if I train all my learners on X, this will occur immediately, this will occur in the intermediate timeframe, and this will occur long term. So, you might imagine some kind of compliance training where you say, well, you want to teach people how to do something better or comply with a new requirement, what have you; in the immediate sense, we have to do the following things. And what CIPP allows you to do, along with some of the methodologies that are its progeny, is accommodate all of those external things like environment, or like a national labor strike, or like an economic downturn. You know, everything from macroscopic things down to specific learner things, and accommodate them in your program evaluation approach, so that when you collect data, you know where it is going and what it applies to. So, I'm collecting attendance; okay, that's a short-term thing, I should be looking at individual learners, but when a department gets a certain fraction of its learner population covered, I should expect to see the following behaviors change in that department, as an example, or in a national, you know, what have you, population within the global company. So, what CIPP and similar models as a systematic approach allow you to do is very broad analyses of outcomes on the organization. So, if you are an organization that's looking for that kind of information, that's a much better approach. However, one of the strengths of Kirkpatrick is that it focuses on the individual. And if you want to know granularly each individual learner's outcome and how it affected that individual, the data is available in an approach like CIPP, but I would say, you know, philosophically, you're not that much interested in it. What you are more interested in is all the other peer learners and collectively how their behavior is changing. And depending on your requirement, that might be sufficient. You know, there's no such thing as a free lunch in program evaluation, but what I think we find is that depending on what information we want to gather and develop, that will dictate what data we want to collect, and I think it's important to distinguish information from data here. Data is data. It's numbers and, you know, text fields. But when you aggregate them in meaningful ways, that data will tell you something. It will convey information that is actionable. And that's the other strength of CIPP. CIPP will tell you exactly what to do with the data and what actions to take. Do we continue the training? Do we change it? Do we supplement it? You know, it's hard to tell in Kirkpatrick if the training needs to be changed. People really enjoy it. People find it meaningful and relevant. Okay, but we're still not doing things well. Well, do we change the training, or is it the learners?
You know, you lose the relationship between organizational goal and learner outcome, unless of course you go all the way up to level 4.
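As a rough illustration of the logic-diagram idea (this is not a formal CIPP implementation, and every outcome, threshold, and department figure is invented), you might sketch the chain from context to product, then check departments against it as attendance data comes in:

```python
# A toy CIPP-style logic model. The categories follow the Context, Input,
# Process, Product structure described above, but every outcome, threshold,
# and department figure below is hypothetical.

logic_model = {
    "context": "New compliance requirement for incident reporting",
    "input": "Two-hour virtual workshop delivered to hybrid cohorts",
    "process": ["attendance", "delivery_environment", "treatment_version"],
    "product": {
        "immediate": "Learner completes a report correctly in the sandbox",
        "intermediate": "Departments above 60% coverage show fewer rejected reports",
        "long_term": "Organization-wide audit findings decline year over year",
    },
}

def coverage(attended: int, population: int) -> float:
    """Fraction of a department's learner population that has been trained."""
    return attended / population if population else 0.0

# Hypothetical department figures. Once coverage crosses the threshold named
# in the logic model, start looking for the intermediate outcome there.
departments = {"Operations": (45, 80), "Finance": (4, 75)}

for name, (attended, population) in departments.items():
    cov = coverage(attended, population)
    action = "check intermediate outcome" if cov > 0.60 else "keep building coverage"
    print(f"{name}: coverage {cov:.0%}, next step: {action}")
```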

KAREN VIETH : Right, it’s almost where you’re kind of just stuck in the emotional reaction, right? And you’re not getting beyond that.

CHARLES DYE : Yeah, well, it certainly can happen, and I think a lot of practitioners do. You know, they start hiring and firing facilitators based on, you know, response survey data. Well, an emotional response in the moment, post-delivery, while timely and perhaps an accurate reflection of that person's reaction, you know, your version of a 4 and my version of a 4 are two very different things on a 5-point scale. And so, depending on sample sizes and learner population sizes, you can start making very bad decisions, or inaccurate decisions, based on improper interpretation of data.

KAREN VIETH : Yeah, because it can be so subjective, yeah.

So, last question, as it relates to true learning transfer, since that's really what we need to be focusing on: not necessarily determining whether we like something or not, but how it transfers back on the job. I think you have mentioned some resources and references that you wanted to share in terms of how we can evaluate whether people are really getting the content that is being presented?

CHARLES DYE : Yeah, associated with this podcast, there are going to be some links to some academic research. I promise it's not going to be super dry. They are going to be fairly short reads, published in the first and second decades of this millennium, and really, they talk about the benefits and goals of program evaluation generally, as they apply to training programs. I would submit that the other thing you want to think about, particularly with respect to virtual and hybrid, is to make sure you collect information around the delivery environment, and include that as part of your data collection protocol when you start your program evaluation. It is profoundly important to understand that if people are affected negatively by the learning environment, they are, in turn, going to be affected negatively by, or more inclined to react negatively to, the treatment. And conversely, an excellent treatment designed for the hybrid or virtual environment can dispel all of that problem. But we saw exactly not that at the beginning of the 2020 COVID-19 pandemic. If you cast your mind back, in the US and in Europe, primary and secondary education was told, okay, you've got a week off, take your program virtual. And that's primary and secondary ed, but post-secondary ed and corporate all had to do the same thing. They just suddenly started having video calls with PowerPoint. And that's not designed for hybrid or virtual delivery. That's, well, you know what, we can talk about that for another hour, but you know, it's not optimized for that delivery environment, and so, big surprise, after 18 months of being subjected to that kind of training, people are really turned off by it. I would say that the hybrid and virtual environment is at least as capable of delivering different types of training as the in-person classroom that is viewed as the gold standard. The question is, what's the best environment for the particular training intervention you are trying to do? And I know you've had other podcasts around learning environment authenticity and that kind of thing, but the shorthand summary is, if people are going to be doing this work on their desktop, the chances are they should be learning it on their desktop. Conversely, if they are going to be learning interpersonal skills in face-to-face communication, chances are an in-person, face-to-face kind of delivery environment is more appropriate. This isn't magic. It makes perfect sense, and in fact, the data is very, very strong to indicate that that's the case. So, when you take a step back and think about evaluating your program, make sure you have data around the delivery environment. Make sure you have data around whether the content was treated for that environment, because otherwise, you know, you really are kind of just feeling around in the dark.
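As a small, purely hypothetical sketch of why delivery environment and treatment belong in the protocol: the same outcome data looks very different once you group it by environment and by whether the content was actually treated for that environment. Every record and score below is invented, and "transfer_score" simply stands in for whatever on-the-job transfer measure an organization uses.

```python
# Hypothetical evaluation records. "treated_for_env" records whether the
# content was actually designed for the environment it was delivered in;
# "transfer_score" stands in for whatever transfer measure you collect.

from collections import defaultdict
from statistics import mean

records = [
    {"environment": "virtual",   "treated_for_env": True,  "transfer_score": 0.82},
    {"environment": "virtual",   "treated_for_env": False, "transfer_score": 0.41},
    {"environment": "in_person", "treated_for_env": True,  "transfer_score": 0.78},
    {"environment": "virtual",   "treated_for_env": False, "transfer_score": 0.47},
]

# Group outcomes by (environment, treated) so environment effects can be
# factored out instead of being blamed on the training itself.
groups = defaultdict(list)
for r in records:
    groups[(r["environment"], r["treated_for_env"])].append(r["transfer_score"])

for (env, treated), scores in sorted(groups.items()):
    label = "treated for environment" if treated else "not treated"
    print(f"{env:9} | {label:23} | mean transfer {mean(scores):.2f}")
```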

KAREN VIETH : Yeah, so what I'm hearing you say is it all boils down to taking the time to reflect, right? Reflect and evaluate the programs and the outcomes of those programs, taking a look at the delivery environment, the treatment for that delivery environment, and what is working, not just to check a box for training, but what's working to actually transfer that learning back on the job.

Thank you so much, Charles, for joining us today and taking the time to share your expertise. For listeners who want to learn more about evaluating the virtual classroom at InSync Training, we do have a course dedicated to evaluating virtual and hybrid training, located under our Train the Trainer section in our Trending Expert Workshops. You can find us at http://www.insynctraining.com.

Thanks again for coming on InSync Training’s Modern Learning on the Air. Take care.

 
