Why Pilot Evaluation In eLearning And How To Go About It?

Summary: You have spent time and money developing and conducting your eLearning evaluation, yet while combing through the results you struggle to find relevant insights because of skipped questions or uniform responses. Pilot Evaluation in eLearning helps identify skewed results early, bringing rigor to the evaluation.

Pilot Evaluation In eLearning: Reasons Why It Should Be Deployed

Conducting a Pilot Evaluation (PE) helps identify, early on, flaws that impede the collection and analysis of objective, meaningful data, while saving time, effort, and money.

Pilot programs assess the feasibility of the eLearning evaluation before scaling up. This rehearsal and testing stage, run with a handful of participants, allows time to critique, test, and iteratively improve the evaluation design and administration. Improving the evaluation means more than merely vetting the questions; it also means checking the evaluator's own thinking against the pilot group's perspective.

The PE is an opportunity to examine the ease of implementation as well as how respondents understood and interpreted the evaluation. It is also a chance to check for subjectivity and for plausible effectiveness in administration, and it provides an empirical basis for any refinements needed before large-scale implementation.

A PE comprises both Pretesting the survey/interview and Pilot Testing the administration and other running procedures. The former examines the survey instrument and its methods of measurement from a validation perspective, while the latter focuses on how smoothly the evaluation is administered. Skipping either is not advised [1].

Let us consider the practical steps in deploying a Pilot Evaluation. Before you begin, ensure that your evaluation team has completed a self-check of the evaluation tool (inclusive of face validity).

Mechanics Of A Pilot Evaluation In eLearning

1. Pretesting

a. Means For Administering The Evaluation

So you’ve created your survey design and chosen the method for deploying your evaluation to your pilot group. Using a digital form saves manual data entry and lets you focus on data visualization; tools ranging from Google Forms and SurveyMonkey to SurveyGizmo support it. Here’s a handy list of resources on eLearning survey tools.
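As a minimal illustration, assuming your survey tool exports responses to a CSV file (Google Forms does this via Google Sheets, for example), a few lines of Python with pandas give a quick first look at the pilot data. The file name and column names below are hypothetical placeholders:

```python
import pandas as pd

# Load the pilot responses exported from the survey tool
# (file and column names are hypothetical placeholders).
responses = pd.read_csv("pilot_responses.csv")

print(responses.shape)   # (number of respondents, number of questions)
print(responses.head())  # eyeball the first few rows

# Response distribution for one Likert-type question
print(responses["Q1_course_clarity"].value_counts(dropna=False))
```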

b. Explicit Instructions

Your pilot group is a sample of your eLearning course audience. They need explicit information and instructions, along with the rationale for their selection and for the PE itself. Consider spelling out what is expected of PE members in terms of responses, time allocation, and navigating both the PE tool and the eLearning product under evaluation.

c. How Will You Gather Feedback On Your Evaluation Tool?

Feedback on the evaluation tool is the whole point of the PE, so the aim should be to obtain valuable, objective feedback. Such feedback helps bridge the gap between the evaluators’ and respondents’ perceptions. To obtain it, evaluators can either follow up with questions or provide respondents with a feedback tool.

  • Depending on the location of pilot group members, consider an online or offline debriefing session. It comprises interviews and discussions on the efficacy of the evaluation as perceived by the respondents, and can be done at the individual or group level. Taking meticulous notes at this stage is useful.
  • You could also deploy a dedicated feedback tool to gather detailed written feedback. This involves no observations or interviews, just questions that encourage open-ended responses.
  • What should you check for? Ambiguous instructions, skipped questions, little or no variation in responses (e.g., choosing yes/no for all questions), the order of questions, and inconsistent scales; see the sketch below for two of these checks.
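For the quantitative part of these checks, a short script can flag skipped questions and straight-lined responses automatically. A minimal sketch, assuming the responses sit in a CSV export with question columns named Q1, Q2, and so on (all file and column names hypothetical):

```python
import pandas as pd

responses = pd.read_csv("pilot_responses.csv")  # hypothetical export
question_cols = [c for c in responses.columns if c.startswith("Q")]

# Skipped questions: share of missing answers per question
skip_rate = responses[question_cols].isna().mean().sort_values(ascending=False)
print("Most-skipped questions:")
print(skip_rate.head())

# Straightlining: respondents who gave the same answer to every question,
# a common sign of fatigue or disengagement
straightliners = responses[responses[question_cols].nunique(axis=1) == 1]
print(f"{len(straightliners)} respondent(s) answered identically throughout")
```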

d. Bringing Together The Pilot Group

Organizing the pilot group and its constituents is of utmost importance, for they are the ones validating your evaluation tool. Some salient steps to consider are:

  • Depending on your evaluation aim, you could choose experts or non-experts, ensuring the participant list represents the full range of likely end users. Maintaining adequate sociodemographic strata (age, gender, socioeconomic status, ethnicity) lets you test the efficacy of the evaluation across varied groups and keeps the evaluation culturally responsive.
  • Reach out to more people than you need, in case of last-minute refusals.
  • Tell pilot respondents why their participation was solicited and what the evaluation is and looks like. Reaching out through personal contacts can make this easier.
  • Consider incentives for long surveys. I know of many cases where personal contacts did not respond because the survey took around an hour.
  • Provide navigation and access instructions for the evaluation and any resource tools, both within the instructions themselves and via email.
  • Allow at least a week to complete your Pilot Evaluation.
  • Record the completion rate and follow up with those who have yet to respond (a small tracking sketch follows this list).
  • Post-pilot, thank all participants for their time.
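A minimal sketch of the completion-rate tracking mentioned above, assuming the invitee list and collected responses live in two CSV files keyed by email address (all file and column names hypothetical):

```python
import pandas as pd

invited = pd.read_csv("invited.csv")            # hypothetical: one row per invitee
responses = pd.read_csv("pilot_responses.csv")  # hypothetical export

# Completion rate, plus the follow-up list of non-respondents
completed = invited["email"].isin(responses["email"])
print(f"Completion rate: {completed.mean():.0%}")
print("Follow up with:", invited.loc[~completed, "email"].tolist())
```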

e. After Pretesting

Once the results from both the evaluation survey and the feedback tool are in, it is paramount to reflect strategically, with a view to identifying errors in the form, presentation, and administration of the tools.

Errors may arise even after careful planning, creation, and implementation of surveys. These can range from typographical errors to ambiguous or overlapping instructions. A well-run pilot test spots obvious errors that were overlooked so they can be corrected. This reflection stage also provides the time and opportunity to reassess any fatigue that may have caused unreliable responses.

Survey fatigue stems from long questionnaires and from distracted, tired respondents, and results in random or fake responses. If major revisions have been made to the survey, you may need another pretest. However, do not succumb to adopting every change suggested by pilot group members, for doing so could contaminate the overall aim of the evaluation.

2. Pilot Testing

This part of the Pilot Evaluation involves administering the PE to check for inconsistencies in scheduling and in the materials and resources needed for the full-scale run (e.g., which software best supports the analysis).

Pilot Testing usually involves an expert eye overseeing the entire evaluation process, for example sitting in on a mock interview to gauge how biased or unbiased the interviewer is. This stage does not involve revising the questionnaire, only checking the interviewer’s style, wording, and body language.

Finally, be aware that even after the best Pretesting and Pilot Testing, errors in the full-scale study or discrepancies in the budget may still appear. These could result from a low response rate or from excessive amendments, as explained in ‘e. After Pretesting’ above.

Planning a Pilot Evaluation in advance will allow for a smooth analysis and revision before the full-scale evaluation.

References:

[1] Ruel, E., Wagner, W. E., III, & Gillespie, B. J. (2015). The Practice of Survey Research: Theory and Applications.