Sustainable Upskilling Solutions (SUSs): The Next Generation Of eLearning Assessment

Summary: Assessments or quizzes deployed via LMSs are topline and superficial in nature, which directly impacts reporting. Sustainable Upskilling Solutions (SUSs) resolve this by incorporating a comprehensive assessment suite within their technology stack.

SUSs: Easier With More Capability And Value

"No eLearning course, in fact, no course that aims at transferring knowledge to learners, is truly complete until it is accompanied by provisions to assess the outcomes of learning," so Ayesha Habeeb Omer states in her article. Significance of assessing, the actual state of eLearning assessment, has regrettably been in limbo for close to two decades and this is now only changing through the advent and deployment of "Sustainable Upskilling Solutions." SUSs have a somewhat different approach to learning and development compared to traditional LMSs in that they provide organizations with a mechanism to train and assess the entire workforce easily and, most importantly, sustainably.

The Assessing (And Reporting) Problem

eLearning assessment has, by and large, consisted of SCORM quizzes with multiple-choice questions comprising single and/or multiple answers. Some authors favor matching question types, in which the learner is tested on the relationship between two sets of data. Other question types include entering words in a blank space, dragging and dropping words or phrases, drop-down menus from which the correct answer must be selected, as well as word builders, akin to a crossword puzzle.

As if the most prevalent pitfall of assessment (namely, ambiguous questioning) were not enough, some of the above-listed question types truly open a Pandora's box. For example, if a learner must enter a word in a blank space and misspells it, the supplied answer will be marked as incorrect, even if just a single letter is out of place! This question type is also hardly fair to dyslexic learners. Ditto for word builders: While gamification can certainly have its place within a well-defined eLearning strategy, gamification in assessments can be a distraction and unnecessarily complicate the desired objective. After all (and in truth), one is assessing adults…

And that, in short, sums up SCORM's assessment capability. The good news is that these are just the frontend limitations; the backend limitations are unfortunately even worse: SCORM, primarily due to its container-based architecture, makes provision for capturing only course completions, time spent in the course, milestone progression, pass/fail status, and a single score, which of course significantly curtails any meaningful reporting. SCORM's age also shows against newer standards. Specifically, xAPI (also referred to as "Experience API" or "Tin Can API"), as well as CMI5, are hailed as SCORM's successors. These newer standards allow learning departments to track (and thus report on) just about any activity and learning experience that one can observe, including completing activities and simulations, performing job functions, producing work deliverables, completing a Khan Academy course, and so on, all via an externally managed Learning Record Store (LRS). While xAPI (and CMI5) appear to be the holy grail of meaningful reporting as a result of better data management, the reality is somewhat different.
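Before turning to xAPI, it helps to see just how narrow SCORM's data model actually is. The sketch below is a minimal, illustrative example (not any vendor's actual implementation) of roughly everything a SCORM 1.2 course can hand back to the LMS through the runtime API; beyond this handful of fields, there is simply nothing more to report on.

```typescript
// Minimal sketch of what a SCORM 1.2 course can report back to an LMS via
// the runtime API object the LMS exposes. The cmi.* element names are part
// of the SCORM 1.2 data model; the surrounding code is illustrative only.

declare const API: {
  LMSInitialize(arg: string): string;
  LMSSetValue(element: string, value: string): string;
  LMSCommit(arg: string): string;
  LMSFinish(arg: string): string;
};

API.LMSInitialize("");

// Roughly the full extent of assessment data SCORM can persist:
API.LMSSetValue("cmi.core.lesson_status", "passed");     // pass/fail/completed
API.LMSSetValue("cmi.core.score.raw", "82");              // a single score
API.LMSSetValue("cmi.core.session_time", "0000:27:14");   // time spent in course
API.LMSSetValue("cmi.core.lesson_location", "module-3");  // milestone/bookmark

API.LMSCommit("");
API.LMSFinish("");
```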

Per the xAPI specification, an LRS is an external server that is responsible for receiving, storing, and providing access to learning records. In other words, an LRS sits external to an LMS and is, in essence, a repository into which xAPI statements are recorded. Said xAPI statements are expressed in the form of "actor verb object context." To illustrate: "Jane passed induction training 101 with John instructor" (actor = Jane, verb = passed, object = induction training 101, context = John instructor). LRSs come standard with an initial set of verbs (such as "attended," "completed," "attempted," "passed," "failed," etc.) and there is no limit to the number of verbs that a training department can create. The context is likewise not limited in terms of definitions. These "actor verb object context" statements are sent from the LMS to the LRS and subsequently exported to a report generator, for example, from which reports and analytics can be drawn. This seems straightforward and benign; however, for a typical L&D department it is genuinely complex and something that warrants external involvement (in short: a high probability of delays and complications).
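To make this concrete, here is a minimal sketch of the "Jane passed induction training 101" example expressed as an xAPI statement and posted to an LRS. The actor/verb/object/context structure follows the xAPI specification; the LRS URL, credentials, and activity IRI are placeholders.

```typescript
// The "Jane passed induction training 101 with John instructor" example as
// an xAPI statement. The LRS endpoint, credentials, and activity IRI below
// are placeholders, not a real system.

const statement = {
  actor: { name: "Jane", mbox: "mailto:jane@example.com" },
  verb: {
    id: "http://adlnet.gov/expapi/verbs/passed",
    display: { "en-US": "passed" },
  },
  object: {
    id: "https://example.com/activities/induction-training-101",
    definition: { name: { "en-US": "Induction Training 101" } },
  },
  context: {
    instructor: { name: "John", mbox: "mailto:john@example.com" },
  },
};

// Statements are POSTed to the LRS's /statements resource.
fetch("https://lrs.example.com/xapi/statements", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "X-Experience-API-Version": "1.0.3",
    Authorization: "Basic " + btoa("lrs-key:lrs-secret"),
  },
  body: JSON.stringify(statement),
}).then((res) => console.log("LRS responded with status", res.status));
```

Every report the L&D department wants must ultimately be reconstructed from statements like this one, which is precisely where the complexity, and the dependence on external specialists, creeps in.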

While xAPI (and in turn LRSs) is a significant step up from SCORM-based reporting, the fact that each training department can create its own verbs and contexts means that common denominators are lost and comparability is consequently eroded. While generating meaningful, comparable, and objective reports is technically possible, any statistician will confirm that the success and outcome thereof are directly proportional to the quality of the data from which these reports are generated. And if everyone is, in essence, free to do as they like, which they can as a result of the aforementioned verbs and contexts, the adage of "too many cooks in the kitchen" rings very true: The end result could very well be meaningless, non-comparable, and subjective reports, and one is back to square one.

The Sustainable Upskilling Solution

There is, however, a solution to the assessment problem: Sustainable Upskilling Solutions, or SUSs. In addition to simply delivering content to learners, SUSs fundamentally quantify the value that organizations obtain through their training efforts, typically in line with a training evaluation model, such as the CIPP (Context, Input, Process, and Product) Evaluation Model, the Phillips ROI Model, or the Kirkpatrick Training Evaluation Model.

SUSs make provision for all common "container-based" content formats, which extend to SCORM 1.2, SCORM 2004, AICC, HTML5, xAPI, and CMI5; in essence, exactly the same as a "traditional" LMS. In addition, though, SUSs also natively allow common file formats to be disseminated, including the likes of Microsoft Word, PowerPoint, Adobe PDF, interactive PDFs, MP4 video, MP3 audio, Adobe InDesign (INDD), YouTube, Vimeo, ThingLink, and even external links. More about content formats in this article.

The rubber really hits the road when one considers how SUSs handle assessment: SUSs contain a comprehensive assessment suite that can be layered onto existing container-based (SCORM, xAPI, etc.) courses; this is truly unique! The key here is that the assessment sits outside the container-based content, so that meaningful, structured, and granular data can be extracted and fed back into the reporting subsystem. This is distinct in that the assessment suite allows the learner to benefit from the (existing) course content irrespective of the output format. Better still, the reporting that follows is truly next-gen, given that the standard reporting suite is comprehensive and goes well beyond the reporting capabilities of a "traditional" LMS, and all this without having to get IT involved, unlike an LRS! Should the standard reporting environment not be adequate, SUSs also allow for custom reporting that can even leverage data from external systems. Reporting, though, is the subject of another article.

The foundation of the SUS assessment suite's capability lies within the Question Bank functionality. The Question Bank is a question repository into which questions are loaded, defined, and easily grouped. The course author can then stipulate whether questions should be presented in sequential or randomized order, and can also stipulate that a specific number of random questions be presented from each question group within a Question Bank: For example, draw two questions from Group A (comprising 10 questions), four questions from Group B (comprising 15 questions), six questions from Group C (comprising 20 questions), and so on. The net result is that each learner receives a completely different assessment, yet the results remain comparable due to the grouping functionality. Skills gap analysis, anyone?
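As an illustration of that group-based draw (and only an illustration, since the actual SUS implementation is not public), the sketch below pulls a fixed number of random questions from each group. This is why every learner sees a different paper while results remain comparable group by group.

```typescript
// Illustrative sketch of the group-based random draw described above.
// The types and function names are hypothetical, not an actual SUS API.

interface Question {
  id: string;
  group: string;
  text: string;
}

function drawAssessment(
  bank: Question[],
  plan: Record<string, number>, // e.g. { A: 2, B: 4, C: 6 }
): Question[] {
  const paper: Question[] = [];
  for (const [group, count] of Object.entries(plan)) {
    const pool = bank.filter((q) => q.group === group);
    // Fisher-Yates shuffle, then take the first `count` questions.
    for (let i = pool.length - 1; i > 0; i--) {
      const j = Math.floor(Math.random() * (i + 1));
      [pool[i], pool[j]] = [pool[j], pool[i]];
    }
    paper.push(...pool.slice(0, count));
  }
  return paper;
}
```

Because each paper is drawn per group, scores can still be aggregated group by group, which is what makes the skills gap analysis mentioned above possible.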

The assessment suite (including the creation of Question Banks and questions, as well as the management thereof) is managed directly within a SUS, without the need for external assessment authoring tools. SUSs make provision for numerous assessment parameters, including multiple-choice questions comprising single and/or multiple answers (which may contain images and videos as part of the question), free-text answers, observational/practical assessments, surveys, as well as document or file submissions. From a question definition perspective, SUSs allow for further customization, such as the weighting of answers as well as negative scoring, should this be required. Assessments may be timed or untimed, whereby timed assessments may even be proctored.
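For illustration only, the following sketch shows one way answer weighting and optional negative scoring could be computed for a multiple-choice question; the field names are hypothetical and not an actual SUS schema.

```typescript
// Hypothetical sketch of scoring a multiple-choice question with weighted
// answers and optional negative scoring, as described above.

interface AnswerOption {
  id: string;
  weight: number;   // marks attached to this option
  correct: boolean;
}

function scoreQuestion(
  options: AnswerOption[],
  selectedIds: string[],
  negativeScoring = false,
): number {
  let score = 0;
  for (const option of options) {
    const selected = selectedIds.includes(option.id);
    if (selected && option.correct) {
      score += option.weight;   // weighted credit for correct selections
    } else if (selected && !option.correct && negativeScoring) {
      score -= option.weight;   // penalty for incorrect selections
    }
  }
  return score;
}
```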

Proctoring, or learner authentication via face recognition, is a relatively new phenomenon in eLearning; however, it was somewhat questionable in the past, given that learners' data was (often unreliably) shipped to external parties who performed the authentication process. With SUSs, this level of artificial intelligence happens within the system itself, thereby maintaining a high level of confidentiality and data fidelity.

SUSs typically also include assessment suite mainstays, such as moderation, certification, badges, and learner-to-author feedback options, to name but a few.

Conclusion

In closing, SUSs blur the lines between a number of platforms that were previously freestanding, specifically in this instance between a "traditional" LMS and dedicated assessment tools. The net effect is more flexibility for assessment authors, an elevated and more professional assessment experience for the learner, and a better return for organizations deploying the SUS due to the granularity, insights, and measurement that SUSs seamlessly deliver.