How to Evaluate Learning: Kirkpatrick Model for the 21st Century—A Revision

Dashe & Thomson

I was asked by Wendy Kirkpatrick to remove the copyrighted Kirkpatrick diagrammatic model from my original blog post, How to Evaluate Learning: Kirkpatrick Model for the 21st Century; this post is that revision. The revised model centers on what Kirkpatrick calls Return on Expectations, or ROE.

To “Kirkpatrick” or not to “Kirkpatrick”, that is the Question (or is it?)

Learning Rebels

To “Kirkpatrick” or not to “Kirkpatrick”, that is the question. Many a person has debated the Kirkpatrick evaluation taxonomy. To name one: Dan Pontefract’s Dear Kirkpatrick’s: You Still Don’t Get It (a personal favorite).

Pernicious problems

Clark Quinn

What does matter is that there are two problems in their standard which indicate we still haven’t overcome some pernicious issues. Do you see the problem here? The second is also problematic, in their standard for evaluation: “Reports typical L&D metrics such as Kirkpatrick levels, experimental models, pre- and post-tests and utility analyses.” What you’re measuring with a pre- and post-test is a delta, and the problem is that you would expect a delta: almost any instruction will move scores, so the delta by itself tells you little.
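
To make Quinn’s point concrete, here is a toy sketch in Python (all numbers are hypothetical, not from his post): any instruction should produce a pre/post delta, so the informative comparison is against a group that didn’t take the course, a difference-in-differences style check.

```python
# A minimal sketch with made-up scores. The point: both groups improve on a
# retest, so the trained group's delta alone overstates the course's effect;
# the difference between the two deltas is the informative number.
def mean(xs):
    return sum(xs) / len(xs)

trained_pre, trained_post = [55, 60, 58], [78, 82, 80]  # took the course
control_pre, control_post = [54, 61, 57], [70, 73, 69]  # did not

delta_trained = mean(trained_post) - mean(trained_pre)  # expected to be positive
delta_control = mean(control_post) - mean(control_pre)  # often positive as well
effect = delta_trained - delta_control                  # difference-in-differences

print(f"trained delta = {delta_trained:.1f}, "
      f"control delta = {delta_control:.1f}, effect = {effect:.1f}")
```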

Vale Don Kirkpatrick

Clark Quinn

Last week, Don Kirkpatrick passed away. There were two major problems with his model’s legacy, however. The second problem was that, whether or not he intended it (and there are reasons to believe he didn’t), it became associated only with training interventions. Kirkpatrick rode this tool for the rest of his career, built a family business on it, and wasn’t shy about suggesting that you buy a book to learn about it.

MTA: Why the Kirkpatrick Model Works for Us

CLO Magazine

In early 2012, the MTA had a problem — too many of its recently trained bus drivers were involved in accidents. That summer, Michael Wiedecker, a 25-year veteran of the MTA, was appointed director of operations training, which meant this became his problem to solve.

How to Evaluate Learning: The Kirkpatrick Model for the 21st Century

Dashe & Thomson

Under Kirkpatrick’s revised “Four Levels of Evaluation” model, what we need to do is find out what success looks like in the eyes of these senior managers and stakeholders, and let them define their expectations for the training program.

How L&D can embrace digital to solve fundamental training problems

Matrix

L&D departments can embrace new and constantly developing digital technologies to solve — or at least better respond to — fundamental problems of training. Digital solutions are constantly being developed to address exigent problems in every aspect of business.

Time to retire Kirkpatrick?

From the Coleface

When you first turned your thoughts to evaluating training, it’s odds-on that a colleague or consultant recommended that you have a look at the Kirkpatrick model. There is a commonly held misconception that you need to do Kirkpatrick Level 1 first, then Level 2, and so on. Anyone who pretends that Kirkpatrick is the whole answer to evaluation should take some time to re-evaluate.

What's the problem with Kirkpatrick?

Clive on Learning

While acknowledging that Kirkpatrick's four levels of evaluation had proved valuable over more than thirty years in helping to measure instructor-led, content-based 'training' interventions, Sloman felt that a new approach was necessary to support a more self-directed, work-based 'learning' process. It's worth checking first of all whether Kirkpatrick's model is still useful for evaluating the top-down stuff. So does Kirkpatrick's model apply as well in a learner-centred context?

The Learning Alliance and the Four Levels of Training Evaluation

The Performance Improvement Blog

In response to my blog post titled “Kirkpatrick’s Four Levels of Training Evaluation: A Critique”, Wendy Kirkpatrick wrote a comment directing me to a white paper that she co-authored with Jim Kirkpatrick, “The Kirkpatrick Four Levels: A Fresh Look After 50 Years, 1959–2009.”

Alternatives to Kirkpatrick

bozarthzone

While the Kirkpatrick taxonomy is something of a sacred cow in training circles — and much credit goes to Donald Kirkpatrick for being the first to attempt to apply intentional evaluation to workplace training efforts — it is not the only approach. Several critics find the Kirkpatrick taxonomy seriously flawed, not least because it is largely atheoretical and ascientific (hence ‘taxonomy’, not ‘model’ or ‘theory’). More later on alternatives to the Kirkpatrick taxonomy.

How to Measure Online Course Effectiveness

CourseArc

Luckily, there’s a proven process that helps you measure the effectiveness of your courses and start to fix any problems in their delivery: Kirkpatrick’s Four-Level Approach to Assessing Training Outcomes.

Measuring Training Effectiveness Through Gaming

Dashe & Thomson

Should we sit the client through a well-worn PowerPoint covering Kirkpatrick’s method, knowing that by the time the project is nearing its completion there will most likely be neither time nor budget for such frivolousness? So did Skyrim-as-training score well on the Kirkpatrick model?

Evaluating Social Learning

Dashe & Thomson

There are people looking at applying the Kirkpatrick model, there are people measuring the use of social learning tools, and there are people talking about something similar to Brinkerhoff’s Success Case Method. He appears to be using a variation of the Don and James Kirkpatrick revised model.

Conducting Post-Course Evaluations

CourseArc

The industry standard Kirkpatrick model measures training based on four levels of analysis. Level 1: Did the learners enjoy the training? At the higher levels, managers may notice, for example, that an employee now approaches problems differently or can solve issues faster.

Front-End Analysis: Backward Analysis and the Performance Gap

Dashe & Thomson

Don Clark, in the post “Analysis” on his Big Dog, Little Dog: Performance Justification blog, says that the Japanese approach to performance improvement is to ask “why” five times when confronted with a problem or a desire to improve part of an organization.
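
As a rough sketch of how that drill-down works, each answer to “why?” becomes the subject of the next question. The function and the canned answers below are illustrative assumptions, not anything from Clark’s post:

```python
# An illustrative "five whys" drill-down. The `ask` callable stands in for a
# facilitated discussion; in a real workshop, a person supplies each answer.
def five_whys(problem, ask, depth=5):
    chain = [problem]
    for _ in range(depth):
        answer = ask(f"Why? ({chain[-1]})")
        if not answer:  # stop early if the group runs out of causes
            break
        chain.append(answer)
    return chain        # chain[0] is the symptom, chain[-1] the likely root cause

# Hypothetical example with canned answers, for illustration only:
answers = iter([
    "Operators skip the safety checklist under time pressure",
    "The checklist takes too long to complete",
    "It duplicates steps already logged elsewhere",
    "Two departments own overlapping procedures",
    "No one is responsible for consolidating them",
])
print(five_whys("Defect rate is rising", lambda q: next(answers, None)))
```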

Avoid and Correct Employee Evaluation Pitfalls

CLO Magazine

Using diverse methods like surveys and focus groups to collect data from multiple sources, we are able to capture more truth and provide a comprehensive picture of training impact across all four Kirkpatrick levels. A common organizational problem is the proliferation of data.

It’s Time to Rethink the Value of Training and Development

CLO Magazine

Companies are increasingly providing a diverse range of learning resources, such as in-person conferences, live online courses and self-directed access to problem-solving, as they recognize the importance of supporting their employees through accessible, topical training and development.

Banishing Evaluation Fears

CLO Magazine

So why does the problem still exist? There is also reluctance to evaluate the degree to which participants apply what they learned in training once they are back on the job, or what the Kirkpatrick Model refers to as behavior (level 3).

Impact and ROI of Learning: Worth Pursuing or Not?

CLO Magazine

This subjective estimate of impact should be no problem for senior leaders as long as the results are presented conservatively and with humility.

Measuring Success (ROI) of a Training MOOC, Part 2

Your Training Edge

In the previous post, I outlined the four-level model of evaluation developed by Donald Kirkpatrick. See here for a more in-depth look at how some companies are using MOOCs to solve real business problems.

New and improved evaluation

Clark Quinn

A few years ago, I had a ‘debate’ with Will Thalheimer about the Kirkpatrick model (you can read it here). In the debate, I was lauding how Kirkpatrick starts with the biz problem and works backwards.

Is this thing on? Tips for measuring course effectiveness and return on investment

Obsidian Learning

The Kirkpatrick four levels of training evaluation. A quality checklist not only helps you spot and correct problems before the learner sees them; it also, as Robert Mager (1997) reminds us, helps course designers identify opportunities for course improvement.

Using Kirkpatrick's Four Levels to Create and Evaluate Informal & Social Learning Processes

Big Dog, Little Dog

This is the real value of Kirkpatrick’s Four Level Evaluation model: it allows us to take a number of measurements throughout the life span of a learning process in order to place a value on it, making it a process-based solution rather than an event-based one. Note that this post uses an actual problem, based on informal and social learning, as the basis for the solution. One of the mistakes Kirkpatrick made was putting too much emphasis on smiley sheets.

What Do You Do With Your Evaluation data?

Adobe Captivate

Donald Kirkpatrick created the four-level model for training evaluation, which most organisations claim to cherish. For those unfamiliar, the four levels are as follows. Reaction – this answers the question: what did the learners think of the training?
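
For reference, here is a minimal sketch of how the four levels might be encoded when tagging evaluation data by level. The enum itself is an illustrative assumption; the level names and guiding questions follow the standard model:

```python
from enum import IntEnum

class KirkpatrickLevel(IntEnum):
    """Kirkpatrick's four levels of training evaluation."""
    REACTION = 1  # What did the learners think of the training?
    LEARNING = 2  # Did they acquire the intended knowledge and skills?
    BEHAVIOR = 3  # Do they apply what they learned back on the job?
    RESULTS = 4   # Did the organizational outcomes the training targeted improve?

# e.g., tagging a survey item with the level it actually measures:
item = {"question": "How useful was the session?", "level": KirkpatrickLevel.REACTION}
print(item["level"].name, int(item["level"]))  # -> REACTION 1
```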

How to Improve Learning Retention with Contextual Feedback

CourseArc

Or, as George Bernard Shaw once famously said: “The single biggest problem in communication is the illusion that it has taken place.” If you want to learn more about Donald Kirkpatrick and his training evaluation model, visit [link].

Levels of Design

Clark Quinn

In a recent conversation, we were talking about the Kirkpatrick model, and a colleague had an interesting perspective that hadn’t really struck me overtly. Kirkpatrick is widely (not widely enough, and wrongly) used as an evaluation tool, but he talked about using it as a design tool, and that perspective made clear for me a problem with our approaches. So, there’s a lot of debate about the Kirkpatrick model, whether it helps or hinders the movement towards good learning.

Starting from the end

Clark Quinn

Week before last, Will Thalheimer and I had another one of our ‘debates’, this time on the Kirkpatrick model (read the comments, too!). The reason I like the Kirkpatrick model is that it emphasizes one thing I see the industry failing to do. The problems with Kirkpatrick are several. Finally, Kirkpatrick is too often wrongly treated as a tool only for evaluating training (even the language on the site, as the link above will show you, talks only about training).

Level of ‘levels’

Clark Quinn

I was defending Kirkpatrick’s levels the other day, and after being excoriated by my ITA colleagues, I realized there was a discrepancy not only between principle and practice, but between my interpretation and the model as it’s espoused. Both of these fail to understand the intention: Kirkpatrick (rightly) said you have to start at level 4. Consequently, any mention of Kirkpatrick only reinforces the notion that courses are the salve for all ills.

Measuring Training Program ROI

LearnDash

The problem is that not everyone is prepared to effectively measure training. At the very least, a robust evaluation system like the Kirkpatrick model should be used.

Revisiting 70:20:10

Clark Quinn

To start, 70:20:10 is a framework: not a specific ratio but a guide to thinking about the whole picture of developing organizational solutions to performance problems. The approach goes from a problem to a solution that incorporates tools, formal learning, coaching, and more.

Commonly Used Training Evaluations Models: A Discussion with Dr. Will Thalheimer

Convergence Training

You can see there are some problems out there. Four Common Learning Evaluation Models: Kirkpatrick, Kaufman, Phillips & Brinkerhoff. Well, of course, the most common, the most well-known, is the Kirkpatrick four-level model. And most of you know the Kirkpatrick model.

A ‘Field of Dreams’ Industry

Clark Quinn

But it’s clear that the problem is worse; the evidence suggests that L&D overall is in a ‘Field of Dreams’ mentality. A new report (in addition to the two I cited last week) documents the problems in L&D.

How do you measure microlearning?

Axonify

Learning and development (L&D) professionals have been dealing with problems measuring the effectiveness of their programs for decades. Most L&D pros can’t get past level 2 of the Kirkpatrick Model because measuring a traditional learning program takes SO MUCH effort. Microlearning solves the problems with traditional measurement. Before you build a learning program, you have to know what problem you’re trying to solve.

Stop Evaluating Training!

Upside Learning

Kirkpatrick’s evaluation model has long been the holy grail of training-effectiveness measurement for businesses. “So how do you evaluate the success of the eLearning that you create?”