The right stuff

Well that was unexpected.

When I hit the Publish button on “Not our job”, I braced myself for a barrage of misunderstanding and its evil twin, misrepresentation.

But it didn’t happen. On the contrary, my peers who contacted me about it were downright agreeable. (A former colleague did politely pose a comment as a disagreement, but I happened to agree with everything she stated.)

I like to think I called a spade a spade: we’re responsible for learning & development; our colleagues are responsible for performance; and if they’re willing to collaborate, we have value to add.

[Image: bar graph comparing the impact of your ideas inside your brain with the much higher impact of your ideas when you put them out there.]

The post was a thought bubble that finally precipitated after one sunny day, a long time ago, when Shai Desai asked me why I thought evaluation was so underdone by the L&D profession.

My post posited one reason – essentially, the inaccessibility of the data – but there are several other reasons closer to the bone that I think are also worth crystallising.

1. We don’t know how to do it.

I’m a Science grad, so statistical method is in my blood, but most L&D pros are not. If they haven’t found their way here via an Education or HR degree, they’ve probably fallen into it from somewhere else à la Richard in The Beach.

Which means they don’t have a grounding in statistics, so concepts such as regression and analysis of variance are alien and intimidating.

Rather than undertake the arduous journey of learning it – or worse, screw it up – we leave it well alone.
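For perspective, though, the maths needn’t be heavy lifting. Here’s a minimal sketch – in Python, with invented scores and group names purely for illustration – of the kind of analysis of variance that puts us off: a one-way ANOVA comparing post-assessment results across three delivery modes.

    # A minimal sketch of a one-way ANOVA. The scores and delivery modes below
    # are invented purely for illustration.
    from scipy import stats

    face_to_face = [72, 85, 78, 90, 66, 81]
    virtual_classroom = [70, 75, 68, 83, 74, 79]
    self_paced = [65, 71, 60, 77, 69, 73]

    # f_oneway tests whether the group means differ more than chance alone would explain
    f_stat, p_value = stats.f_oneway(face_to_face, virtual_classroom, self_paced)

    print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
    if p_value < 0.05:
        print("The difference between delivery modes is statistically significant.")
    else:
        print("No significant difference between delivery modes was observed.")

A few lines like these won’t replace a proper grounding in statistical method, but they do suggest the barrier to entry is lower than it looks.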

2. We’re too busy to do it.

This is an age-old excuse for not doing something, but in an era of furloughs, restructures and budget freezes, it’s all too real.

Given our client’s ever-increasing demand for output, we might be forgiven for prioritising our next deliverable over what we’ve already delivered.

3. We don’t have to do it.

And it’s a two-way street. The client’s ever-increasing demand for output also means they prioritise our next deliverable over what we’ve already delivered.

If they don’t ask for evaluation, it’s tempting to leave it in the shadows.

4. We fear the result.

Even when all the planets align – we can access the data and we’ve got the wherewithal to use it – we may have a sneaking suspicion that the outcome will be undesirable. Either no significant difference will be observed, or worse.

This fear will be exacerbated when we design a horse, but are forced by the vagaries of corporate dynamics to deliver a camel.


The purpose of this post isn’t to comment on the ethics of our profession, nor to lament the flaws of the corporate construct. After all, it boils down to human nature.

On the contrary, my intention is to expose the business reality for what it is so that we can do something about it.

Previously I’ve shared my idea for a Training Evaluation Officer – an expert in the science of data analysis, armed with the authority to make it happen. The role builds a bridge that connects learning & development with performance, keeping those responsible for each accountable to one another.

I was buoyed by Sue Wetherbee’s comment proposing a similar position:

…a People & Culture (HR) Analyst Business Partner who would be the one to funnel all other information to across all aspects of business input to derive “the story” for those who order it, pay for it and deliver it!

Sue, great minds think alike ;-)

And I was intrigued by Ant Pugh’s Elephant In The Room in which he challenges the assumption that one learning designer should do it all:

Should we spend time doing work we don’t enjoy or excel at, when there are others better equipped?

Just because it’s the way things are, doesn’t mean it’s the way things should be.

I believe a future exists where these expectations are relinquished. A future where the end result is not dictated by our ability to master all aforementioned skills, but by our ability to specialise on those tasks we enjoy.

How that will manifest, I don’t know (although I do have some ideas).

Ant, I’m curious… is one of those ideas an evaluation specialist? Using the ADDIE model as a guide, that same person might also attend to Analysis (so a better job title might be L&D Analyst) while other specialists focus on Design, Development and Implementation.

Then e-learning developers mightn’t feel the compulsion to call themselves Learning Experience Designers, and trainers mightn’t be similarly shamed into euphemising their titles. Specialists such as these can have the courage to embrace their expertise and do what they do best.

And important dimensions of our work – including evaluation – won’t only be done. They’ll be done right.

7 thoughts on “The right stuff”

  1. Morning Ryan…. Learning analysis and impact has always been my sweet spot…and I know when and if done well, L&D (or whatever the function is called) will gain a seat and a louder voice at the table. The L&D role is multifaceted and due emphasis needs to be put on the A&E of ADDIE for it to become a respected “force”!

    Love your posts….. must catch up with you and the old “crew” soon….. a la Friday lunch?

    Take care Sue

  2. Good post! Evaluation, IMO, would be more meaningful and straightforward if it was “tied at the hip” to the initial Analysis of the Performance in focus – Outputs & Tasks and the Stakeholder Requirements for both – which too often isn’t the focus of way too many L&D initiatives. The focus is unfortunately on what Tom Gilbert called The Cult of Behaviors (in his 1978 book: Human Competence), or on its close cousins: Topics, and Tasks Sans Outputs. All with “Face Validity” mind you – but lacking “How To Apply” them in the Learner’s Performance Context back on the job. Thus Formal Learning falls short – forcing everyone into Informal Learning – which may eventually be effective but never efficient. Cheers!

  3. I’ve long held the view that if you want something to be done, you need to bake it into the process. Guy, I know you are a champion of the process – thanks for your comment!

  4. I was thinking as I read this that evaluation is set up at the start of the process so, when it comes time to evaluate after the “event”, the analysis is almost, if not entirely, automatic. If the team had a LA (Learning Analyst) they’d be in their element before, during and after the process. Exciting times!

  5. Oh and one more thing, training/learning/development is a business topic, not an L&D topic. Where the business evaluates training as having missed the mark when it comes to evaluation, in many cases, it’s shame on them, not so much the L&D team! A business manager who “delegates” training isn’t earning their salary!

  6. I whole-heartedly agree Bill that evaluation should be set up at the start of the process. One of the misconceptions of ADDIE is that it’s linear with the “E” at the end. On the contrary, I think it should complement the Analysis in the beginning, so that you clarify not only what the problem is but also what success looks like.

    And it’s music to my ears when you say that training is a business topic, not an L&D topic. In my view, a manager who remains at arm’s length from training – particularly its evaluation – is shirking their responsibility to manage the performance of their team.
