6 Practical Tips for Applying Assessment and Quiz Data to Your eLearning Content

Course analytics data can reveal a wealth of information about the effectiveness of the assessments and quizzes you build. Read this article to find out how to build your courses to maximize the usefulness of this data, and which specific parts of your courses this data can help you improve.

1) Consider non-technical reasons for 0% or 100% pass rates

When you start getting data back from your assessments, any data at the absolute extremes of pass and fail will immediately stand out. If everyone got a question right, or if nobody got it right, that’s obviously unusual and should prompt further investigation.
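
If your authoring tool or LRS lets you export raw attempt data, flagging these extremes is quick to automate. Below is a minimal sketch assuming a hypothetical export with one row per answered question and a boolean success flag; the column names and figures are illustrative only.

```python
import pandas as pd

# Illustrative columns: one row per answered question, with a boolean
# 'success' flag, e.g. pulled from xAPI "answered" statements.
attempts = pd.DataFrame({
    "question_id": ["q1", "q1", "q2", "q2", "q3", "q3"],
    "success":     [True, True, True, False, False, False],
})

pass_rates = attempts.groupby("question_id")["success"].mean()

# Flag the absolute extremes for manual review.
extremes = pass_rates[(pass_rates == 0.0) | (pass_rates == 1.0)]
print(extremes)
```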

If all answers are correct, the question is likely too easy. This may be an entirely acceptable outcome: learners may only need to demonstrate quite elementary knowledge for compliance reasons. However, if the question is taking up space that more challenging or relevant questions could use, consider removing it. Also review whether the question is difficult enough that a perfect pass rate could instead point to cheating or answer sharing, and investigate further if those are plausible explanations.

Similarly, if all answers are incorrect, re-evaluate whether such a difficult question is serving your learning objectives. Consider whether something in the way the question is phrased is causing everyone to fail: make sure it is easy to understand and doesn’t contain misleading information. An extreme example would be telling the learner to select one correct answer when your logic requires multiple correct answers.

2) Look out for red flags caused by technical issues

You should also check to see if there’s a technical reason for your 0% or 100% pass rates. Test that correct and incorrect answers are being reported accurately, and that your underlying analytics implementation is working as intended.
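
One low-effort check is to pull a sample of raw xAPI statements from your LRS and confirm that every “answered” statement actually carries a result. The sketch below assumes you have exported statements to a JSON file (the file name is hypothetical); the verb ID and result fields follow the xAPI specification.

```python
import json

# Hypothetical export of raw xAPI statements from your LRS as a JSON array.
with open("statements.json") as f:
    statements = json.load(f)

ANSWERED = "http://adlnet.gov/expapi/verbs/answered"

for stmt in statements:
    if stmt.get("verb", {}).get("id") != ANSWERED:
        continue
    result = stmt.get("result", {})
    if "success" not in result:
        # Questions that never report success/failure will distort pass rates.
        print("Missing result.success for:", stmt.get("object", {}).get("id"))
```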

Also look out for questions wholly absent from your analytics. This is most commonly an issue with your question banking implementation, and suggests that the question is simply never being shown to users.
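
A quick way to catch this is to compare the question IDs in your bank against the IDs that actually appear in your analytics data. The IDs below are placeholders for illustration.

```python
# Placeholder IDs: everything in your question bank vs. everything that has
# appeared at least once in your analytics export.
bank_ids = {"q1", "q2", "q3", "q4", "q5"}
seen_ids = {"q1", "q2", "q3"}

never_shown = bank_ids - seen_ids
print("Questions never reported:", sorted(never_shown))
```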

3) Multiple attempts should prompt a rework

Does the data suggest that learners are making multiple attempts at your questions? This is most likely a sign that learners are trying to brute-force the question (i.e. guessing answers), and if this is widespread or undermining your learning goals, your assessment strategy may need a rethink.
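
If you can export attempt-level data, counting attempts per learner per question makes this pattern easy to spot. The export shape and the threshold in this sketch are purely illustrative.

```python
import pandas as pd

# Illustrative export: one row per question attempt per learner.
attempts = pd.DataFrame({
    "learner_id":  ["a", "a", "a", "b", "b"],
    "question_id": ["q1", "q1", "q1", "q1", "q2"],
})

attempt_counts = attempts.groupby(["learner_id", "question_id"]).size()

# Repeated attempts at the same question hint at brute-force guessing.
print(attempt_counts[attempt_counts >= 3])
```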

In this scenario, you should rework the logic of your assessment to give learners a certain number of attempts before some kind of intervention takes place. This may involve failing them and providing remedial learning.
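
What that logic looks like depends on your authoring tool, but the decision itself is simple. Here is a minimal sketch with an assumed limit of three attempts and made-up outcome labels.

```python
MAX_ATTEMPTS = 3  # assumed limit; tune to your assessment strategy

def next_step(attempt_count: int, passed: bool) -> str:
    """Decide what happens after a scored attempt (outcome labels are made up)."""
    if passed:
        return "record_pass"
    if attempt_count >= MAX_ATTEMPTS:
        # Stop the learner from guessing indefinitely: fail the attempt and
        # route them to remedial learning before they can retake.
        return "fail_and_assign_remedial_learning"
    return "allow_retry"
```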

4) Ask for learner feedback on your questions

One simple change to how you construct your courses can give you a useful extra layer of insight: ask for feedback! An optional question at the end of your assessment is one obvious method; learners will be quick to point out questions that simply don’t work or that they believe are incorrect. Monitor the responses and correct or follow up as appropriate.

Another approach that you can use alongside or instead of an end-of-assessment form is to allow learners to give a simple star rating on each question in the assessment. This can be more effective because learners aren’t always open to providing long text feedback, they may forget which specific question had an issue, and free-text feedback can be difficult to work with at scale. Set up an optional Likert-scale question type in your eLearning authoring tool and watch out for consistently low or high ratings that warrant further investigation.
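
Once ratings start coming in, a simple aggregation will surface the questions worth a closer look. The export shape, minimum response count, and rating threshold below are all assumptions to adjust for your own data.

```python
import pandas as pd

# Illustrative export: one row per optional star rating (1-5) per question.
ratings = pd.DataFrame({
    "question_id": ["q1", "q1", "q1", "q2", "q2", "q3"],
    "stars":       [1, 2, 1, 4, 5, 3],
})

summary = ratings.groupby("question_id")["stars"].agg(["mean", "count"])

# Only trust averages backed by a reasonable number of responses, and flag
# questions that are consistently rated poorly.
flagged = summary[(summary["count"] >= 3) & (summary["mean"] <= 2.0)]
print(flagged)
```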

5) Take a long-term view of your assessment

Your first priority should be to establish that your assessment works ‘in the moment’: that it operates as intended, and that your employees can learn from it. Then comes the longer process of proving that they are learning from it.

For every behavior that you want to change through your course, you’ll need a method of measurement before and after your assessment. For example (a simple before-and-after comparison is sketched after the list):

  • You could monitor your sales pipeline before and after your sales training. If sales improve across your workforce post-training, it can be inferred that the training is a positive factor. If sales fall, you would operate under the assumption that the training is ineffective and needs to be reworked and rerun.
  • Similarly, customer satisfaction surveys can reveal whether a course focused on improving customer interactions is having the desired effect.
  • By monitoring your incident logbooks, you can link the effectiveness of health and safety compliance training to the trend in the accident record.
  • Use a pulse survey to ask trainees, managers, and other stakeholders whether the behavior has become part of individual or common practice.
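
As a simple illustration of the before-and-after idea, the sketch below compares a single metric for the same team pre- and post-training; the metric and figures are invented.

```python
# Invented figures: average weekly sales per rep for the same team,
# measured before and after the training rollout.
pre_training  = [12.0, 9.5, 11.0, 8.0, 10.5]
post_training = [13.5, 10.0, 12.5, 9.0, 11.0]

pre_mean = sum(pre_training) / len(pre_training)
post_mean = sum(post_training) / len(post_training)

change = (post_mean - pre_mean) / pre_mean * 100
print(f"Average sales changed by {change:+.1f}% after training")
```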

Beyond the passes and fails and individual correct and incorrect answers that eLearning standards such as xAPI can give you, the data that proves program effectiveness can come from all kinds of sources. Pulling these disparate threads together may be easier with a learning analytics program capable of interpreting this data automatically without the considerable manual legwork of fetching and crunching in complex spreadsheets.

6) Avoid jumping to conclusions

Finally, it’s important to remember that data shouldn’t necessarily prompt drastic action, and the full context of your data points should be considered. If you’ve had a recent influx of junior employees, a department-wide fall in scores is only natural. Customer satisfaction may have fallen because a team has become understaffed. Sales may have increased quarter on quarter not because of your training, but because of the new financial year.

This doesn’t mean that, when metrics swing positively or negatively, a fear of ‘correlation not equalling causation’ (the classic “shark attacks and ice cream sales” fallacy) should stop you from taking any action at all. If you hypothesize what could be changed to improve the course, build a new revision of the course based on that hypothesis, and test the new version against the old, you can slowly build even better courses.
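
One lightweight way to run that comparison is a standard chi-square test on pass/fail counts for the old and revised versions; the counts below are made up, and this is just one possible approach rather than a prescribed method.

```python
from scipy.stats import chi2_contingency

# Made-up pass/fail counts for cohorts who took each course version.
#                  pass  fail
old_version     = [40,   60]
revised_version = [55,   45]

chi2, p_value, _, _ = chi2_contingency([old_version, revised_version])
print(f"p-value for the difference in pass rates: {p_value:.3f}")
```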

About the author: Peter Dobinson

With a background in learning technology, Peter helps organizations implement and use Watershed. He also works with the industry to further the implementation of xAPI.

