
      Learning impact - being specific

      by Helen Cassidy

      Measuring learning impact is something that organisations sometimes shy away from – even now, with the growth of data analytics, it’s a difficult thing to get right. But as professional learning leaders, it’s an area where we should be expending our best efforts.
       
      There are two key reasons why it’s so hard:
      1. A learning programme can impact on many aspects of a business such as performance, culture, competitiveness, adaptability, inclusivity, personal development and more, making it difficult to evaluate;
      2. A learning programme is usually part of a bigger change initiative, so if employees do change their behaviour, it can be difficult to know how much of that change to attribute to the learning.
      For these reasons, L&D teams, and even learning providers, are sometimes reluctant to commit to a promise of success. But with solid, upfront discovery work, a learning solution can be designed with outcomes that can be meaningfully evaluated and that feed into an overall measurable goal.
       
      We tend to think of impact in terms of ROI or business performance, but in reality, the measures of impact will depend on the business need and drivers, and the expected role of learning. Your discovery should surface this. Sometimes, the learning is aimed at changing behaviour; sometimes the learning is part of something bigger – and the role of the learning might be to raise awareness, increase knowledge, or even generate engagement or enthusiasm. By asking questions and drilling into the detail, you can determine the role it’s meant to play, and how this relates to business performance.
       
      You can then be very specific in your measurement. If it’s about knowledge, check for knowledge. If it’s retained knowledge over time, check for knowledge over time. If it’s about behaviour change, check for behaviour change. 
       
      When you’re clear about what to measure, think about how to measure it. Will you use surveys (and if so, how will you write them)? Longitudinal self-reporting measures, assessed role-plays, or peer discussion and feedback? Whatever you choose, you’re going to generate data, so think about how you can capture and analyse it efficiently.
       
      And finally, measuring impact is only useful if you plan to do something with the result. By all means, use it as part of your marketing and to sell your services to the business, but also be open about how you will use it for reporting, and for driving future improvements.
       
      To give you a flavour of how we tackle this, here’s a recent example:
      We developed a learning experience for a global retail company. It was a significant investment for the organisation and had an overall objective of increasing advocacy for its products among salespeople across the world, and ultimately driving an increase in global sales.
       
      Some key challenges included:
      • Many of the learners would not be employees of the company itself and had limited or no awareness of the company and little specific product knowledge
      • Many non-employees had no vested interest in the success of the company or its training
      • None of the training was, or even could be made, mandatory
      • Proving a direct link between optional, self-driven learning activities and an increase in global sales (always the ultimate goal in retail) was going to be a tall order.
      We wanted to create an experience that would act as a solid foundation for a continuous learning cycle, with the ability to iterate and grow the solution based on learner feedback and engagement, store and sales manager feedback, and advocacy results from mystery shopping exercises.
       
      From one measurable business goal, we fleshed out five clear learning outcomes that directly supported it. For each learning outcome, we developed a series of interleaved learning activities that repeated, tested and applied the knowledge and skills related to that outcome, which the learner could visit and revisit over time, in their own time.
       
      The learning app offers opportunities to like, share, discuss and challenge, providing more than just completion metrics. Challenge activities include learners recording themselves applying new skills and uploading their videos to the app. The pathways through the learning activities can be refreshed and augmented with new and updated content.
       
      As the solution rolls out globally into 2023, the plan is to measure engagement (completions, time, interactions), satisfaction (survey results, likes, manager feedback), learning (task results, shared posts), behaviour (advocacy, expertise, market feedback) and business (mystery shopping results, sales, client satisfaction, brand image). The metrics will be gathered soon after rollout, at 6 months and at 12 months. The learning activities, and the solution as a whole, will evolve, based on the findings.
       
      To conclude, there are reasons why it’s hard to measure learning impact, but if you are clear about your aims, your measures and your processes upfront, it will be much easier to demonstrate the value of your learning initiatives.
       
      If you’re looking for some of the most common learning evaluation models to understand the levels that can be achieved, you can find them here.

       

      Interested in a learning strategy that works? Talk to us today 

