Not many people really like writing assessments of any kind, whether the end product is a test, quiz, or complex certification examination. In fact, I have always thought of learning assessment as one of the toughest challenges facing learning and development (L&D) practitioners.

Today’s L&D practitioners are typically not statisticians, assessment specialists, standardized test writers, or learning psychologists. Their responsibilities tend to be broader than any one of those specialties: they are charged with preparing learners for the realities of working in a technology-mediated, information-rich, and increasingly collaborative workplace.

The need for practical assessment tools and resources

Here’s the point: We need to provide assessment-writing tools and resources for members of the learning and development profession who are not assessment specialists but still need specialized guidance in this increasingly important skill area. Providing those practical tools, templates, professional perspectives, and resources for creating today’s learning assessments is the goal of our latest eBook, Writing Assessments to Validate the Impact of Learning.

Edited by experienced learning practitioner and assessment expert A.D. Detrick, our eBook begins with Jane Bozarth’s insightful introduction and then presents current perspectives from several industry thought leaders, including Mike Dickinson and Marc Rosenberg. We also provide usable guidelines, downloadable templates, assessment websites, and other practical resources for all aspects of hands-on learning assessment. These include in-depth references, an annotated bibliography of the Guild’s assessment resources for further reading, and a glossary of terms for those new to the specialty field of learning assessment.

Selecting the best type of assessment item

Table 1 shows an example of these detailed guidelines. Of the six question types described below, multiple choice is often the preferred item type for most cognitive tests because such items can assess most of Bloom’s cognitive categories and can be scored quickly and reliably by an individual or by a machine. Bloom’s Taxonomy and the newer “Digital Taxonomy” are critically important assessment tools; the eBook provides detailed information about the cognitive categories within each taxonomy, as well as additional templates and web resources.

Table 1: Advantages and disadvantages of different types of questions

True/False questions

  Advantages:
  • Can be used to quickly assess multiple objectives
  • Questions are very easy to write
  • Easy to grade/score

  Disadvantages:
  • Can only be used for the lowest cognitive categories (Remember, Understand)
  • Easy to guess correctly
  • Almost impossible to determine reliability

Matching questions

  Advantages:
  • Can be used to quickly assess multiple items in a minimum of space
  • Good for an environment that requires a large amount of factual recall

  Disadvantages:
  • Only useful for the lowest cognitive categories (Remember, Understand)
  • Can be confusing and time-consuming for learners

Multiple choice questions

  Advantages:
  • Can be used to assess most cognitive categories (Remember, Understand, Apply, Analyze, Evaluate)
  • Can assess a learner’s ability to integrate information
  • Can diagnose a learner’s difficulty with certain concepts
  • Can provide learners with immediate feedback about why distractors were wrong and why correct answers were right
  • Can cover a wide range of difficulty levels
  • Usually require less time for learners to answer
  • Usually easy to score and grade

  Disadvantages:
  • Do not allow learners to demonstrate knowledge beyond the options provided
  • Effective questions take a great deal of time to construct, especially ones that test higher levels of learning
  • Encourage guessing because one option is always right
  • Test takers may misinterpret questions

Fill-in-the-blank questions

  Advantages:
  • Can be used to assess most cognitive categories (Remember, Understand, Apply, Analyze, Evaluate)
  • Require the learner to know the answer, not just recognize it

  Disadvantages:
  • Very difficult to write for the higher cognitive categories
  • Automated grading can be difficult

Short answer questions

  Advantages:
  • Can be used to assess most cognitive categories (Remember, Understand, Apply, Analyze, Evaluate)
  • Require the learner to know the answer, not just recognize it
  • Easy to write, as there are no distractors to create

  Disadvantages:
  • Need to make sure there is only one correct answer
  • Scoring can be difficult and/or time-consuming
  • May encourage memorization instead of learning

Essay questions

  Advantages:
  • Can be used for the highest cognitive categories (Analyze, Evaluate, Create)
  • Allow a greater context for answers
  • Answers are more realistic and generalized
  • Can provide a more realistic and generalizable task for the test
  • Usually take less time to construct
  • Extremely difficult for test takers to guess the correct answer

  Disadvantages:
  • Require much more time to answer
  • Answers are only as good as the learner’s writing skills
  • Grading is more subjective; non-test-related information may influence the scoring process
  • Require extensive effort to grade in an objective manner
  • Require more time to grade

Source: The eLearning Guild Research, 2016.
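
For readers who build or score quizzes in software, the scoring contrast in Table 1 can be made concrete with a short sketch. The Python example below is purely illustrative and uses a hypothetical item structure of our own invention (nothing from the eBook): objective item types such as true/false, matching, and multiple choice can be machine-scored by comparing the response against an answer key, while short-answer and essay responses must be routed to a human grader.

    # Illustrative sketch only: objective items are scored by key comparison,
    # while constructed-response items are flagged for human, rubric-based grading.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Item:
        kind: str                  # "true_false", "matching", "multiple_choice", "short_answer", "essay"
        prompt: str
        key: Optional[str] = None  # correct option for objective items

    def score(item: Item, response: str) -> Optional[bool]:
        """Return True/False for objective items; None means a human must grade it."""
        if item.kind in ("true_false", "matching", "multiple_choice"):
            return response.strip().lower() == item.key.strip().lower()
        return None  # short answer and essay need rubric-based review

    quiz = [
        Item("multiple_choice", "Which cognitive category does 'design a lesson plan' target?", key="c"),
        Item("essay", "Evaluate the trade-offs between essay and multiple-choice assessments."),
    ]
    responses = ["C", "Essays reach higher cognitive categories but take longer to grade..."]

    for item, response in zip(quiz, responses):
        result = score(item, response)
        print(item.kind, "->", "needs manual grading" if result is None else result)

The point is not the code itself but the division of labor it encodes: the more an item type constrains the response, the cheaper and more reliable automated scoring becomes, which is exactly the trade-off the table describes.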

Best practices for writing assessment items

Writing individual assessment items can often be the most daunting part of the process, regardless of the item type you choose. Table 2 is an abbreviated list of the best practices included in the eBook; they help ensure that your questions are as effective as possible and apply to all question types.

Table 2: Best practices for writing effective questions

Write effective questions

  • Focus only on one thought, problem, or idea in each question.
  • Keep questions independent. Do not refer to any part of any other question.
  • Make sure the question addresses a very specific problem.
  • Make sure the question does not make any assumptions or rely on any context outside of the question; the question should be able to stand alone and provide all information needed to answer it.
    • Place critical and descriptive material early in the question.
    • Ensure the question includes all information needed to answer the question.
  • If the question includes a graphic, refer to the graphic in the question (e.g., “Identify X in the graphic below”).
  • Write questions in positive form. Avoid using words like NOT and EXCEPT in the question stem (e.g., “Which of these situations should NOT include an audit?” should be rewritten as “When should a situation include an audit?”).
  • Include the selection criteria in the question if the question calls for a judgment (e.g., “If ____, then which is the best…?”).
  • Ensure the question tests the learning objective as defined in the curriculum and measures the appropriate cognitive level. This will help ensure that the question is neither too easy nor too difficult.
  • Do not instruct or inform in the question.
  • Do not use absolute terms, such as always, all, never, except, or none, in questions.
  • Do not give clues to the correct answer in the question or answer choices.
  • Do not include questions that give away the answer to another question.
  • Do not trick the participant. Test specific knowledge, not test-taking skills.
  • Do not write multi-variable questions (e.g., “How and when would professional skepticism apply?”).

Review questions for clear and concise writing

  • Use a style guide for consistency.
  • Express complete thoughts.
  • Use active voice in the present tense.
  • Remove all irrelevant or redundant material.
  • Use economy of language (e.g., use “to” rather than “in order to”).
  • Avoid words with multiple meanings.
  • Avoid “window dressing” or superfluous information that isn’t necessary to ask the question and get a response.
  • Always strive for clarity and readability. Make sure you are testing the specific piece of knowledge, not also the learner’s reading or comprehension ability.

Review questions for correct grammar and punctuation

  • Capitalize the same words the same way every time.
  • Write out terms followed by the acronym in parentheses the first time they appear in a question (e.g., Chief Learning Officer [CLO]), unless the acronym itself is being tested.
  • Try to write questions as complete sentences rather than statements that become complete sentences only when read with an answer option (e.g., “A correctly placed tick mark will ____.” should be written as “Where would a correctly placed tick mark appear?”).


Source: The eLearning Guild Research, 2016.
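
Several of the Table 2 guidelines (avoid absolute terms, prefer positively worded stems, trim “in order to,” avoid multi-variable questions) lend themselves to a simple automated style check before a reviewer ever sees a draft. The Python sketch below is a rough illustration of that idea, not a checklist from the eBook; the patterns and wording are our own hypothetical simplifications of the guidelines above.

    # Rough, illustrative "question linter" loosely based on the Table 2 guidelines;
    # the rules below are a simplification for demonstration, not an official checklist.
    import re

    CHECKS = [
        (r"\b(always|all|never|none)\b", "avoid absolute terms"),
        (r"\b(not|except)\b", "prefer positively worded stems (avoid NOT/EXCEPT)"),
        (r"\bin order to\b", "use 'to' rather than 'in order to'"),
        (r"\b(how and when|who and why)\b", "possible multi-variable question"),
    ]

    def lint_question(stem):
        """Return a list of style warnings for a single question stem."""
        warnings = []
        for pattern, message in CHECKS:
            if re.search(pattern, stem, flags=re.IGNORECASE):
                warnings.append(message)
        return warnings

    print(lint_question("Which of these situations should NOT include an audit?"))
    # -> ['prefer positively worded stems (avoid NOT/EXCEPT)']

A check like this cannot judge whether a question tests the right cognitive level, but it can catch the mechanical slips, such as negative stems and absolute terms, that the guidelines warn against.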

Looking to the future of learning assessment

In addition to these practical resources, we thought it important to look at the “future of assessment”: the practices and issues that lie ahead. Here is a brief sampling of what our contributors (including me) said. As you will see, we often paint a less-than-enthusiastic picture of where assessment is going.

“I’ve long warned of my concerns about so much ‘evaluation by autopsy’: our tendency to assess at the end of training or another learning intervention. We offer a smile sheet at the end of the day. We offer a multiple-choice quiz at the end of a module. The learning management system spits out reports about number of completions and average time to complete and average quiz scores … [But] none of that really tells us much about how well a learner can apply new learning, nor do we find out much about how to fix a course that isn’t working well. The future is bringing new tools to help us assess—and respond—as needs emerge or conditions evolve. There will be a quantum shift in the needs assessment phase—the beginning, not the end—of our work.” (Jane Bozarth)

“Probably the biggest irony of discussing the future of learning assessment is that the future looks so much like the past. Rather, the future looks very much like the past we should’ve been implementing all along. Indeed, technology has provided us with new and exciting ways to deliver assessments and capture assessment data, but the future remains relatively unchanged. We need to create valid, reliable, assessments that provide some certainty that the knowledge was a product of learning—and not guessing or prior knowledge. Then we need to use these measurements for the performance management of our learners as well as our own internal learning and development efforts.” (A.D. Detrick)

“My hope for the future is that instructors and trainers will view assessments as an integral part of the learning process, not an isolated side step, especially not one that focuses on lower levels of learning, such as terminology and taxonomies. I think that, too often, writers of paper-and-pencil tests (whether given on actual paper or on a computer screen) are lured by those very media into writing simple multiple-choice and true/false assessments, when more authentic assessment is within reach… So for the future of assessments, my hope is that we will continually challenge ourselves to go beyond the lower levels of learning when we write assessments, and that we will not be restrained by the media at hand (e.g., multiple-choice questions) from finding imaginative, more authentic ways to measure students’ learning.” (Mike Dickinson)

“If we want learning assessment to have a future, here are three suggestions. First, take some money away from your instructional design budget and build your organization’s evaluation expertise. You may produce fewer courses, but what you do build might have a shot at actually demonstrating real performance improvement. Second, move compliance training away from measures of attendance and completion to better measures of actual performance (a hard slog, I know). And finally, put your clients and customers in charge of evaluation by letting them tell you what constitutes success and then, together, you measure it … or not, and see what happens.” (Marc Rosenberg)

“We all hear about the need for assessments to produce better and more precise data. The most important use of these assessment data (as we’re told) is to isolate the precise impact of training interventions on business outcomes. This is an increasingly tall order for most learning organizations…  [Instead] we need to focus our future assessment efforts on gleaning actionable and practical assessment data rather than producing an ever-growing morass of precise data points that try to connect a single training event to a change in business metrics, such as sales or costs. At the end of the day, by focusing on actionable assessment data, we’ll save ourselves from an exhausting waste of effort and resources because—in most cases—precise assessment data aren’t required to produce practical, feasible, repeatable, and actionable results.” (Sharon Vipond)

Finding the pathway to better and more effective learning assessment

To conclude, I believe that our contributing editor, A.D. Detrick, has said it best: “For far too long, we have relegated assessments to an afterthought in the design of courses. Every instructional model that includes assessments will start with the assessment and build the course design from that, but I have seen very few courses designed that way. Instead, assessments are assembled at the last minute, and their design is primarily informed by an insufficient allotment of time. The job of writing the questions is often left to subject matter experts who have knowledge of the content, but (usually) no experience writing assessment questions. Worst of all, the analysis of the results is often completely absent. This is a common dysfunctional cycle, in which it feels useless to measure something that was poorly designed, and hard to properly design something that won’t be effectively measured… The purpose of this eBook is to compile enough information to provide a pathway that would allow anyone to write [effective] assessment questions, regardless of experience or role.”

To emphasize what A.D. Detrick and all of our contributors are saying: if followed properly, these guidelines and resources can help you find a better “pathway” to learning assessment. These immediate, practical steps help you avoid the pitfalls that often compromise today’s learning assessments and break the dysfunctional cycle of poor measurement and poor assessment design.
