Top 10 eLearning question writing crimes

Jamie Paddock, our eLearning Consultant intern, looks at the top 10 question-writing crimes when creating and designing eLearning.

Although we live in a brave new world of super-fast broadband, 4K TV, AR, VR, and learning ecosystems, the humble multiple-choice question still has an important role to play in the world of learning.

There is considerable skill involved in writing good MCQs and lots of ways of writing bad ones too. No matter how well the graphic designers wrap them up in creative and elegant clothing, those old familiar flaws still emerge blinking into the daylight with remarkable frequency.

So below is a “top ten” list of common question-writing ‘crimes’. As bad question writing at Access Group is a fantasy that will never happen, we’ve set up a ‘Federation For Racing Unicorns’ (FFRU to you and me) and use it as a light-hearted topic to show how NOT to write MCQs and the ‘crimes’ involved…

1. Obvious questions

You may not know much about racing unicorns, but you can probably pick the right answer to this question:

You work in the anti-doping team of the Federation For Racing Unicorns (FFRU). The latest test results have just come in and a famous racing unicorn is showing a positive test. What should you do?

A. Destroy the results as they are clearly incorrect

B. Pass the results on to a junior member of the team to deal with in case they later prove to be incorrect

C. Speak to the rider concerned to tip him off

D. Follow the FFRU standard procedures. Report the positive results to the anti-doping board, complete form DRB6b in triplicate, keep the pink copy for your file and return the other two by recorded delivery to FFRU HQ

2. Labelling questions

We still see too many of these questions, and they usually indicate that the writer doesn’t really understand the subject or is being paid by the question! Take this example. Is it really testing a key learning point that’s going to help me do my job better, or is it just lazy question writing?

Which clause in the FFRU Code of Conduct is concerned with the allowed length of a horn?

A. Clause 3

B. Clause 4

C. Clause 7

D. Clause 9

Other examples include: ‘In what year…?’ ‘What do the initials FCA stand for…?’, ’Which Act introduced…?’, ‘How many Data Protection principles are there …?’ and so on.

3. Arithmetical questions 

These are questions where the distractors are just ‘placeholders’ and not the result of a logical misunderstanding or miscalculation.

If six unicorns set out through a mysterious forest and three stray from the group, how many are left?

A. 3

B. 23

C. 43

D. 14.75

This ‘crime’ is often instigated during the SME updating process when, say, a tax rate changes. The writer updates the question stem and correct answer but ignores the distractors, which then become arithmetically improbable.

On a related point, remember wherever possible to keep the arithmetic simple - unless you are testing mathematical skills. Focus on the principle, not the maths.
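The point above can be sketched in code. Here is a minimal, hypothetical example using the numbers from the unicorn question: each distractor is derived from a plausible mistake, rather than being an arbitrary placeholder like 23, 43 or 14.75.

```python
# Hypothetical sketch: build distractors from plausible mistakes,
# not arbitrary placeholders.
total, strayed = 6, 3

correct = total - strayed  # 3: the intended subtraction

distractors = [
    total + strayed,  # 9: added instead of subtracting
    total,            # 6: forgot to subtract at all
    correct - 1,      # 2: a plausible off-by-one slip
]

options = sorted([correct] + distractors)
print(options)  # [2, 3, 6, 9] -- every option reflects a line of reasoning
```

When the tax rate (or stray count) changes later, regenerating the distractors from the same mistakes keeps them arithmetically plausible.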

4. Negative stems

Often compounded with negative answer options, these questions are unnecessarily confusing for the student.

Which of the following is NOT a drawback of having an extra-long horn?

A. They don’t make unicorns light enough to race

B. They find grazing difficult

C. They don’t make it easy to outrun lions

D. They get their heads stuck in the ground

Confused?

5. “All of (None of) the above” as an answer option 

Normally the correct answer anyway, this option becomes even more inappropriate when the order of the answers is randomised!

Who is responsible for reporting breaches of the FFRU rules?

A. Competitors

B. The referee

C. The federation inspectors

D. All of the above
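To see why randomisation makes this worse, here is a small illustrative sketch (the option texts are taken from the example above; the shuffle mimics what a typical eLearning engine does to answer order):

```python
import random

# Answer options from the example question above.
options = [
    "Competitors",
    "The referee",
    "The federation inspectors",
    "All of the above",
]

random.shuffle(options)  # many eLearning engines randomise option order

# After shuffling, "All of the above" can land anywhere in the list, so
# "the above" may no longer refer to all (or any) of the other options.
print(options.index("All of the above"))  # 0, 1, 2 or 3 at random
```

A safer wording such as “All of these options” survives shuffling, but the crime is better avoided altogether.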

6. Lack of (or inappropriate) alignment between the stem and answer options

If only one answer (or in extreme cases no answer) aligns with the stem, the question is flawed.

Fast unicorns will usually defeat other animals in a race because they…

A. Have stronger legs

B. Other animals slow down in rivers

C. The size of the horn

D. Fast unicorns can out-run Pegasus

On a related point, a well-written assessment question normally has a ‘focused’ stem - it should be possible to formulate a response before looking at the answer options.

7. Answer options aren’t discrete 

In the example below, if option D is correct, then options A and B, which are components of that answer, are also valid responses.

To enter the Great Unicorn Race, what, if anything, do you need to do?

A. Complete the online entry form

B. Pay the race entry fee

C. Join the FFRU (Federation For Racing Unicorns)

D. Complete the online entry form and pay the race entry fee

8. Subjective question stem

‘What would you do?’ (‘What do you think?’, ‘How will you respond?’) This may be suitable in exercises within an eLearning module but is not appropriate for objective testing, where you are seeking a correct response rather than an opinion.

While competing in the Great Unicorn Race, you spot a unicorn in difficulties in a river.

What action will you take?

A. Gallop on, as you are there to win

B. Laugh and point him out to other competitors

C. Stop and try to help

D. Look for the nearest rope to pull the Unicorn out

Depending on the character of the respondent, any of the answers is possible and all of them might be valid.

“What should you do?” is marginally better but still too subjective.

9. Answer options that contain ‘never’ or ‘always’ 

These words give the game away that the options are distractors, especially when included alongside a ‘usually’ option.

Which one of these statements about FFRU official unicorn races is correct?

A. Pink unicorns always cheat

B. Fast unicorns usually win

C. Intelligent unicorns never win

D. Small unicorns always fall over

10. If you can’t think of a tenth question, what should you do?

A. Don’t write one

B. Desperately try to write just one more even though you’ve already run out of learning points

C. Rename your article as the ‘Top Nine’ question writing crimes…

Explore our Access Digital Learning and Compliance software