
Data & learning: Is learning analytics stuck?

There seems to be a consensus among thought leaders, analysts, and high-profile practitioners that L&D needs to up its game on data if it is to improve its standing within organizations.

More specifically, it needs to show that its activities result in tangible, measurable performance improvements.

In his report on the 2020 Learning and Development Global Sentiment Survey, Donald H. Taylor wrote:

“For L&D practitioners to use analytics to best effect, they need to correlate data around learning activity with data from the business. In the words of Trish Uhl, this approach allows them to ‘prove the value of, and improve the effectiveness of’ their work.”

In voicing this analysis, Don Taylor draws not only on the data he has gathered over several years through this survey but also on a substantial body of opinion in the industry. Time and time again we have heard similar sentiments aired in blogs, in white papers, and from conference platforms: L&D needs to prove its worth by using data to show that it can move the needle on metrics that mean something to the business.

The argument is often couched in terms of an existential threat: if L&D can’t up its game on data, it will become irrelevant, be side-lined, and ultimately cease to exist.

Why are we stuck?

Speculation within the L&D community about the lack of progress on learning analytics tends to settle on three main reasons:

  • L&D doesn’t have the necessary skills
  • There’s something wrong with the evaluation models L&D uses
  • The culture in organizations militates against it

It is possible that all three of these candidate reasons play a part, together creating a situation in which evaluating impact presents more threat than opportunity for L&D, and so it doesn’t happen. It is also perilous to generalize about a subject that covers so many different organizations across different business sectors, each with its own highly specific needs and culture.

However, some of these reasons are less credible than others in the role of smoking gun. They might be blockers, but they probably aren’t showstoppers.

It is also important to acknowledge how businesses use their operating data: given the sheer amount of data available and the pressures on business leaders’ time, any presentation of it has to be brief and relevant.

Given these pressures, businesses aren’t, generally speaking, that interested in things that happened two months ago (the impact evaluation of your training program, for instance). What they urgently want to know is things like “Are we going to make our numbers this year-end/quarter/month?”

So if the blame lies neither with a lack of usable tools nor with a dearth of professional skills to use them, why don’t we evaluate? The culture within organizations would seem to be the biggest stumbling block, the showstopper: a set of assumptions about what training is and what its outputs might be that stops evaluation from happening. The business does not require it, and so it doesn’t happen. And L&D is seen to collude in perpetuating the situation. This conspiracy of convenience looks to be the culprit.

Download our new whitepaper, Data & Learning: A common-sense approach, to find out more.

About the author

Alongside our CEO, Ben Betts, this blog and the rest of the ‘Data & learning’ series has been authored by writer, speaker, podcaster and communications consultant John Helmer.
