Thursday, March 1, 2012

Give Tests a Test Run

By Shelley A. Gable

After drafting an assessment for training, what steps do you take to validate that it is realistic and accurate?

Many of us probably have assessments reviewed by subject matter experts (SMEs). But have you considered administering the assessment to a group of experts as well?

First, let’s look at what makes for a quality training assessment...

Just to be clear, this post focuses on assessments that take place while the initial training event is in progress (i.e., Level 2 assessment). Some of what I’m suggesting here may not work as well for on-the-job assessments designed to verify continued transfer to the job after the initial training (i.e., Level 3).

At a high level, a quality assessment should:
  • Align with the agreed-upon performance objectives
  • Challenge learners to solve realistic (and relevant) workplace scenarios
  • Consist of questions or tasks with a clear success measure (e.g., one right answer to a question, an objective rubric for evaluating performance on a task, etc.)
  • In the case of multiple choice questions, offer reasonable distracters that represent common mistakes and misunderstandings
Okay, I’m sure there are other important principles I haven’t mentioned...but I wanted to highlight this handful because they relate to other ideas in this post.

Now let’s think about the SME review...

How do your SMEs typically review assessments? Do you email them with a request to validate that the assessment is accurate? Do you discuss the assessment together in a collaborative review session?

Personally, I find that when I email an SME something for review, most reviewers skim for accuracy and call out any errors. However, I’m not sure they’re also on the lookout for omissions and realism. Granted, it’s my job to ensure quality on those fronts, but I need their help to do it.

So, whether I’m in a collaborative review session or in an email exchange, I like to ask the following about each assessment item:
  • Is the scenario realistic? If not, what details do I need to change or add so it feels real?
  • Does the correct answer represent what you would coach an employee to do in the given situation?
  • Do any of the distracters seem feasible enough that someone could make a compelling argument for them? (For multiple choice questions, this helps ensure there is a single best answer.)
  • Do the distracters represent mistakes you’ve seen people make?
And how about doing a test run of the test?

After putting the knowledge assessment through your review process, consider administering it to a group of SMEs to see how they do. This might help reveal any shortcomings prior to using the assessment with actual learners. Watch for items with low success rates and any distracters that were frequently selected. You might even follow up with testers to find out what prompted them to respond to certain items incorrectly.
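
As a side note, if you capture each tester's responses electronically, even a few lines of code can do this tally for you. Here's a minimal sketch in Python; the items, responses, answer key, and the 80% flag threshold are all made-up examples, just to illustrate the kind of analysis I have in mind:

    from collections import Counter

    # Hypothetical test-run data: the keyed answer for each item,
    # and the option each SME tester selected.
    answer_key = {"item_1": "B", "item_2": "D", "item_3": "A"}
    responses = {
        "item_1": ["B", "B", "C", "B", "B"],
        "item_2": ["D", "A", "A", "D", "A"],
        "item_3": ["A", "A", "A", "A", "A"],
    }

    for item, picks in responses.items():
        correct = answer_key[item]
        success_rate = picks.count(correct) / len(picks)
        # Tally how often each distracter was chosen.
        distracters = Counter(p for p in picks if p != correct)
        flag = "  <-- review this item" if success_rate < 0.8 else ""
        print(f"{item}: {success_rate:.0%} correct, "
              f"distracters chosen: {dict(distracters)}{flag}")

In that made-up example, item 2 would jump out right away: a low success rate, with one distracter drawing most of the wrong answers.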

This can work especially well if you are attempting to close a performance gap and have a group of exemplary performers to test with. Whether you can test with a large group of exemplars or only a couple, this extra test run can be well worth the effort.

Do you have other methods?

How do you ensure that your assessments are realistic and accurate? Do you use any of the tactics above? Do you have other approaches? Please share!
