Calling all eLearning aficionados! We’re back with a brand-spanking new blog post all about eLearning assessments. When we work with subject matter experts to develop multiple-choice questions, we see the same mistakes again and again, and they have a big impact on the validity and reliability of assessments. But don’t despair: we’re going to show you how to fix them!

eLearning assessment definitions

Before we jump right in, here are a few definitions of important terms you’ll see in this post:

Validity = An assessment is valid if learners who know the content get high scores, while learners who don’t know the content get low scores.

Reliability = An assessment is reliable if learners with similar ability levels get similar scores. These scores can be high or low, as long as they’re consistent.

To give you an example, if all learners who know the content well get similar low scores then the eLearning assessment is reliable but not valid. If all learners who know the content well get similar high scores then the assessment is reliable and valid.

Key = The correct answer in a multiple-choice question.

Distractors = The incorrect answers in a multiple-choice question.

Options = The answer choices in a multiple-choice question (the key + the distractors).

Now that you’re all clued up, let’s jump right into it!

Mistake 1: Giving away clues to the key

In every eLearning assessment, a learner’s test score = a) what they learned + b) what they guessed correctly.

That means that if you want an accurate picture of learner knowledge you need to make it hard to guess the right answer.
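If you like to see it in numbers, here’s a minimal sketch in Python (purely illustrative, not from any real assessment) of how guessing pads out a score. It assumes a learner answers every question and picks at random between the options whenever they don’t know the answer, so each blind guess lands on the key 1 time in k for k options:

    def expected_score(num_questions, num_known, num_options):
        """Expected score for a learner who guesses at random on
        every question they can't answer from knowledge."""
        num_guessed = num_questions - num_known
        # Each blind guess has a 1-in-num_options chance of hitting the key.
        lucky_guesses = num_guessed / num_options
        return (num_known + lucky_guesses) / num_questions

    # A learner who genuinely knows half of a 20-question test:
    print(expected_score(20, 10, 3))  # ~0.67 with 3 options per question
    print(expected_score(20, 10, 4))  # ~0.63 with 4 options per question

And that’s the best case: every giveaway in the list below pushes the guess rate above chance, inflating scores even further.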

Some common giveaways are:

  • A key that’s longer than the distractors.
  • A key that contains a word or phrase from the question that’s not found in the distractors.
  • A key that’s more complex, precise, or detailed than the distractors.
  • “All of the above” or “None of the above” as a key.

Take a look at the question below (the key is option A). What do you think’s wrong with the question? (Hint: There are a few things wrong with it!)

Why do Snafflegooks roar at dawn? Choose the correct answer.

  A) Their roar scares avian and aquatic predators.
  B) The song awakens the pack.
  C) The sound attracts mates.

You might have noticed that the key stands out for a bunch of reasons like:

  • It’s much longer than the two distractors.
  • It’s the only option that contains the word “roar”, which also features in the question.
  • It’s more complex than the distractors, with adjectives “avian” and “aquatic” applied to the subject “predators”.

To fix the question you can change distractors B) and C) to something like this:

Why do Snafflegooks roar at dawn? Choose the correct answer.

  A) Their roar scares avian and aquatic predators.
  B) Their roar awakens the alpha and beta pack.
  C) Their roar attracts healthy and fertile mates.

Now everything is nice and consistent, with nothing jumping out to give away clues to the key!

Mistake 2: Using too many options

Did you notice anything else unusual about the Snafflegooks example question above? Perhaps you were wondering why there were only 3 options?

Ever since that first multiple-choice exam in school, we’ve all been brainwashed into thinking that multiple-choice questions should have 4 options (1 key and 3 distractors), and that’s carried over into how many people approach eLearning assessments. However, there’s a mountain of research indicating that questions with just 3 options (1 key and 2 distractors) perform just as well as (or better than!) questions with 4 options.

Let’s take a look at one of the biggest reviews of the topic. Rodriguez (2005) gathered and analysed data from studies spanning an 80-year period up to 2005. The review found that assessments using 3-option questions are as good as or better than assessments with 4-option questions because:

  • There’s no change in overall assessment reliability or discrimination. Similar discrimination means the assessments are as good at differentiating between learners who know the content well and those who don’t. Similar reliability means that learners with similar levels of knowledge get similar scores.
  • Less time is needed to prepare two plausible distractors than three distractors. That means increased efficiency for question-writers.
  • More 3-choice questions can be administered per unit of time. That means more questions can fit into an assessment, improving content coverage (see the sketch after this list).
  • Questions with four or more choices may expose additional aspects of the content to students, potentially giving clues away to other questions, especially if all distractors are plausible.
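Here’s that content-coverage point in numbers. The sketch below uses the Spearman-Brown prophecy formula, a standard psychometric result for predicting how reliability changes with test length. The starting reliability and question counts are made-up figures for illustration, not data from Rodriguez’s review:

    def spearman_brown(reliability, length_factor):
        """Predicted reliability when test length changes by length_factor,
        assuming the added questions are of similar quality."""
        return (length_factor * reliability) / (1 + (length_factor - 1) * reliability)

    # Hypothetical: dropping one distractor per question lets a 30-question
    # assessment grow to 40 questions in the same testing time.
    print(spearman_brown(0.80, 40 / 30))  # ~0.84, up from 0.80

So even though each 3-option question is individually a little easier to guess, fitting more questions into the same time tends to win that reliability straight back.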

There are plenty of other researchers in agreement if you fancy diving deeper into the topic, such as:

  • Vyas, R. (2008): “Our review of the literature suggests that MCQs [multiple choice questions] with 3 options provide a similar quality of test as that with 4- or 5-option MCQs. We suggest that MCQs with 3 options should be preferred.”
  • Schneid S. D. et al. (2014): “The results from this study provide a cautious indication to health professions educators that using three-option MCQs does not threaten validity and may strengthen it by allowing additional MCQs to be tested in a fixed amount of testing time with no deleterious effect on the reliability of the test scores.”
  • Haladyna T. M. et al. (2019): “The evidence is mounting regarding the guidance to employ more three-option multiple-choice items. From theoretical analyses, empirical results, and practical considerations, such items are of equal or higher quality than four- or five-option items, and more items can be administered to improve content coverage.”

Hey, don’t look at us like that – we’re nerds, of course we read the papers!

Mistake 3: Testing the wrong information

If your questions test irrelevant or trivial information, then the results from your eLearning assessment won’t represent what your learner knows and can do.

Ask yourself:

  1. Is my question clearly testing one of my learning objectives?
  2. Is my learning objective aimed at a high enough level to make sure my learners can do what they need to do after taking the course?

Check out the following breakdown for a quick explanation of learning levels:

  • Remember: The learner can recall or recognise information they learned. However, they may not truly understand that information.
  • Understand: The learner has a much more in-depth understanding of the information they learned than at the Remember level.
  • Apply: The learner can apply the information they learned in the real world to solve real-life problems and carry out real-life tasks.

This is a modified version of the classic Bloom’s Taxonomy, as suggested by multiple-choice question expert Patti Shank in her excellent book Write Better Multiple-Choice Questions To Assess Learning. In the past few years some assessment experts like Patti have been exploring alternatives to Bloom’s Taxonomy, but that’s a topic for another day!

According to assessment gurus Shrock & Coscarelli, the best way to improve assessments is to test above the Remember level. That means that instead of just testing whether learners can repeat information, you should test whether learners truly Understand the information and can Apply it in relevant situations.

Below you can see questions about our free PowerPoint productivity add-in BrightSlide (*cough* subtle plug *cough*) written at three different learning levels: Remember, Understand, and Apply. Can you work out which level applies to which question?

  1) What workflow should you follow when using BrightSlide to replace an icon in a presentation?
  2) What is BrightSlide?
  3) What BrightSlide feature should you use if you want consistent animations across multiple presentations?

In the above example:

  • Question 2 is written at the Remember level. To answer correctly, the learner just needs to have a basic knowledge of what BrightSlide is.
  • Question 3 is written at the Understand level. To answer correctly, the learner needs a much deeper knowledge of BrightSlide and the different features it includes.
  • Question 1 is written at the Apply level. To answer correctly, the learner needs to know how to use BrightSlide to carry out a specific task.

Mistake 4: Writing questions that are hard to understand

Multiple-choice questions that are hard to understand place an unfair cognitive load on learners. If your learners have to think hard just to understand the question, they might struggle to get the right answer, even if they know it! Common issues that make questions hard to understand include:

  • Inclusion of irrelevant details
  • Wordy sentences
  • Unusual words
  • Spelling, punctuation, and grammar errors
  • Unexplained acronyms

Here are some tips to make sure your eLearning assessment questions are easy to understand. This reduces the cognitive load so that you’re testing learners on the content, not on their reading ability.

  • Write shorter sentences that only include essential details.
  • Write in plain language using common words. Your learners shouldn’t need a dictionary just to understand your question!
  • Avoid negative language, e.g. “not”, “isn’t”, and “won’t”. Negative language is harder for learners to get their heads around, making it trickier to focus on and understand the question.
  • Check spelling, punctuation, and grammar.
  • Use scenarios appropriately to raise the level of the question from Understand to Apply. Don’t use scenarios just to bulk out your questions. To learn how to use scenarios well and how to design them in Articulate Rise and Articulate Storyline, make sure you check out Part 3 and Part 4 of our Designing effective eLearning assessments series.
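If you produce questions at scale, a crude automated check can catch some of these issues before human review. Here’s a minimal sketch; the word list and length threshold are our own illustrative choices, not a published standard:

    NEGATIVE_WORDS = {"not", "isn't", "won't", "never", "except"}

    def lint_question(text, max_words=25):
        """Flag common readability problems in a draft question."""
        warnings = []
        words = text.lower().replace("?", " ").replace(".", " ").split()
        if len(words) > max_words:
            warnings.append(f"Wordy: {len(words)} words")
        negatives = NEGATIVE_WORDS.intersection(words)
        if negatives:
            warnings.append("Negative language: " + ", ".join(sorted(negatives)))
        return warnings

    print(lint_question("Why won't the team act on antibiotic resistance?"))
    # ['Negative language: won't']

A check like this won’t replace a careful human edit, but it makes a handy first pass.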

The question below commits a few of these eLearning assessment-writing cardinal sins. Can you spot which ones?

You are in a meeting with a hospital management team who aren’t concerned about the proliferation of antibiotic resistance in they’re hospital. Why should the team be disquieted by the propagation of the antibiotic resistance in the hospital?

This example has spelling mistakes, uncommon words, and negative language. It also has a scenario in the first sentence, which in this case adds nothing to the question and makes it needlessly long. How would you rewrite the question?

Here’s how we would do it:

Why should a hospital management team be concerned about the spread of antibiotic resistance in their hospital?

Mistake 5: Using distractors that are easy to discard

It’s easy to fall into the trap of writing distractors that seem plausible at first glance, but when you look closer they’re clearly wrong. You need to make sure all distractors seem like a potential correct option to learners who don’t know the key. There are two ways to do this:

  1. Use common misconceptions as distractors. This is where subject matter knowledge is essential. If you aren’t a content expert, then reach out to your subject matter experts and ask them what mistakes people usually make when carrying out task X or solving problem Y. These make great distractors.
  2. Incorrectly modify the key. This means changing the key by replacing parts of it in a way that still seems plausible but is incorrect. Here’s an example:

  Key: The sales manager must review the contract within 24 hours.
  Distractor 1: The sales administrator must review the contract within 24 hours.
  Distractor 2: The sales director must review the contract within 24 hours.

We changed the subject of the key from “sales manager” to “sales administrator” and “sales director”.

Similarly, you could alter the timing instead:

  Key: The sales manager must review the contract within 24 hours.
  Distractor 1: The sales manager must review the contract within 48 hours.
  Distractor 2: The sales manager must review the contract within 72 hours.
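If you write a lot of questions in this shape, you can even systematise the swap. The sketch below is hypothetical (the template, roles, and deadlines aren’t from any real course) and builds a full option set by changing exactly one element of the key at a time:

    # Hypothetical key plus plausible-but-wrong substitutions.
    TEMPLATE = "The {role} must review the contract within {hours} hours."
    KEY = {"role": "sales manager", "hours": 24}
    WRONG_ROLES = ["sales administrator", "sales director"]
    WRONG_HOURS = [48, 72]

    def build_options(swap):
        """Return the key followed by two distractors, each made by
        swapping one element of the key for an incorrect value."""
        options = [TEMPLATE.format(**KEY)]  # the key always comes first
        if swap == "role":
            options += [TEMPLATE.format(role=r, hours=KEY["hours"]) for r in WRONG_ROLES]
        else:
            options += [TEMPLATE.format(role=KEY["role"], hours=h) for h in WRONG_HOURS]
        return options

    for option in build_options(swap="hours"):
        print(option)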

Can you think of any other ways to turn the key into a distractor?

So there you have it! As you’re writing multiple-choice eLearning assessments, you just need to ask yourself:

  1. Are you giving away any clues to the key?
  2. Do you need four options? Can you just use three?
  3. Is your question clearly testing one of your learning objectives? And is your learning objective aimed at a high enough level to make sure your learners can do what they need to do after taking the course?
  4. Is your question hard to understand?
  5. Are all of your distractors plausible?

Happy eLearning assessment writing!

This article was inspired by a webinar given by Patti Shank. Patti’s book Write Better Multiple-Choice Questions To Assess Learning is the holy grail for assessment writers and contains a treasure trove of invaluable information.

And if you’re still hungry for more nuggets of BrightCarbon knowledge, check out our Designing effective eLearning assessments series. This has a ton of info that we didn’t have time to cover today, and takes you all the way from the initial theory and prep work to actually designing your own assessments in Microsoft PowerPoint, Articulate Rise, and Articulate Storyline!

Written by Roberto Padovani, Senior consultant
