Wednesday, June 9, 2010

Collecting Data from an eLearning Pilot

By Shelley A. Gable

At last! After weeks – perhaps months – of analysis, design, and development, you finally have a completed and fully functional eLearning course. Finishing that last step is a proud moment. And a relief.

So now what? Roll it out to the masses!

No, wait.

Before rolling it out to the masses (assuming that’s your eventual intent), you should probably pilot the course with a small group of learners from your target audience. Run your pilot for a predetermined amount of time, collect some data to identify what worked about the course and what didn’t, and make some adjustments. Then, you might be ready for a full rollout. Or perhaps another pilot.

So now that we’ve decided to run a pilot, what’s the next step? At this point, many people are tempted to identify pilot participants and start drafting survey questions. But there’s a more systematic way to plan your evaluation.

First, identify the questions your pilot needs to answer.

If you’ve already started writing survey questions, set that draft aside for a moment. Forget surveys. Forget interviews. Just think about questions. What questions should your pilot evaluation effort answer about your eLearning course?

Your best bet is to work with your project team to identify these questions. And you’ll probably find it helpful to refer to evaluation models for guidance, such as Kirkpatrick’s four-level evaluation model.

Examples of questions might include:
  • Were the eLearning course and its activities easy to use?
  • Which topics or tasks did learners struggle with?
  • Did learners perform as expected on the job?


Of course, these are very general questions. There may be questions worth asking that are specific to your course. For instance, if you experimented with a branching scenario, you might ask specifically about the effectiveness or appeal of that activity.

Next, identify who can answer your questions.

If you pulled out that survey draft, put it away again. At this point, we need to identify which stakeholders can answer the questions identified for the pilot.

Which answers must come directly from the learners? If the course has a blend of eLearning and instructor-led training, perhaps there are certain questions that trainers can help answer. Maybe there are questions that the learners’ supervisors should answer. Or maybe there are performance reports you should obtain.

Now, select data collection methods.

This is the step that many people mistakenly jump to first. Until you know what questions you’re asking and who can answer what, you’re really not in a position to make informed decisions here.

After all, the nature of the data you collect should be a primary driver of how you collect it. For instance, if your organization already has a reliable survey tool for collecting learner satisfaction for a course, it might make sense to use that survey. If you want to collect specific examples and stories from learners about their successes or lack thereof, your best bet might be an interview or focus group. If you need to measure on-the-job behavior, you might opt for observation. Naturally, many evaluation efforts employ multiple data collection methods.

Another driver of data collection methodology is resources. How much time do you have to conduct the evaluation? And what is the availability of your pilot participants? If your turnaround time is short, you might not have time to conduct several one-on-one interviews. If your audience is geographically dispersed, observation might not be practical.
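
One way to keep these decisions tied back to your questions is to capture the plan as a simple matrix: each question, who can answer it, and how you’ll collect the answer. Here’s a minimal sketch in Python of what that might look like – the specific questions, sources, and methods shown are illustrative assumptions, not a prescribed plan:

```python
# Hypothetical sketch of a pilot evaluation plan, kept as plain data.
# Each question is mapped to who can answer it and how the answer will
# be collected, before any survey or interview guide gets drafted.

evaluation_plan = [
    {
        "question": "Were the course and its activities easy to use?",
        "sources": ["learners"],
        "methods": ["end-of-course survey"],
    },
    {
        "question": "Which topics or tasks did learners struggle with?",
        "sources": ["learners", "trainers"],
        "methods": ["focus group", "quiz results"],
    },
    {
        "question": "Did learners perform as expected on the job?",
        "sources": ["supervisors", "performance reports"],
        "methods": ["observation", "report review"],
    },
]

# Simple sanity check: flag any question that still lacks a source or a method.
for item in evaluation_plan:
    if not item["sources"] or not item["methods"]:
        print(f"Incomplete plan for: {item['question']}")
```

Even if you never run it as code, thinking in this structure keeps the method choices anchored to the questions and sources you identified in the earlier steps.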

When (and how often) should you collect the data?

Suppose you’re evaluating a two-week blended pilot course, and you intend to survey learners to measure their perceptions of the training. You’ll need to decide whether you’ll measure just once at the end, or whether you should collect data at earlier points as well. If you’re collecting on-the-job performance data, you’ll need to identify the appropriate times to collect data based on the tasks you’re measuring.
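
One lightweight way to make those timing decisions explicit is to express each collection point as an offset from the pilot start date. A hypothetical sketch follows – the two-week duration, the specific checkpoints, and the start date are assumptions made only for illustration:

```python
from datetime import date, timedelta

# Hypothetical collection schedule for a two-week blended pilot,
# expressed as offsets (in days) from the pilot start date.
pilot_start = date(2010, 6, 14)  # illustrative date only

collection_points = [
    {"measure": "learner reaction survey", "offset_days": 7},        # end of week 1
    {"measure": "learner reaction survey", "offset_days": 14},       # end of course
    {"measure": "on-the-job performance check", "offset_days": 44},  # ~30 days after course
]

for point in collection_points:
    when = pilot_start + timedelta(days=point["offset_days"])
    print(f"{when}: collect {point['measure']}")
```

Laying the schedule out this way makes it easier to spot whether you’re measuring only once at the end or capturing earlier points as well.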

What else?

While this should be enough to get the gears turning, naturally, there are several other factors to consider, too. For example:
  • Who (and how many) should participate in the pilot?
  • How do you plan to analyze pilot data?
  • How and to whom should you communicate the results of the pilot?
  • What are the potential risks and mitigating steps for the pilot?


If you’ve evaluated an eLearning pilot, please share your tips and lessons learned in the comments. Or if you have questions or suggestions for future posts related to evaluation, please share those thoughts as well!
