Hone

Demystifying Learning Science & Debunking Learning Myths with Will Thalheimer, Part 2

What's covered

In this episode, Tom Griffiths is joined again by Dr. Will Thalheimer, a world-renowned L&D practitioner, researcher, and author with nearly four decades of expertise. In part two of their discussion, Tom and Will cover how to collect data that is actually meaningful, common measurement mistakes L&D professionals make, and how to leverage the Learning Transfer Evaluation Model (LTEM) to guide your evaluation strategy. Join them for a thought-provoking discussion where Will pulls insights from his groundbreaking work, including his award-winning book "Performance-Focused Learner Surveys" and his forthcoming "CEO’s Guide to Training, Learning, and Work."

About the speakers

Dr. Will Thalheimer

World-renowned L&D practitioner, researcher, and author

Will Thalheimer is a learning expert, researcher, instructional designer, business strategist, speaker, and author. He has worked in the learning and performance field since 1985. In 1998, Will founded Work-Learning Research to bridge the gap between research and practice, compile research on learning, and disseminate research findings to help chief learning officers, instructional designers, trainers, e-learning developers, performance consultants, and learning executives build more effective learning and performance interventions and environments. He speaks regularly at national and international conferences. Will holds a BA from the Pennsylvania State University, an MBA from Drexel University, and a PhD in educational psychology: human learning and cognition from Columbia University.

Tom Griffiths

CEO and co-founder, Hone

Tom is the co-founder and CEO of Hone, a next-generation live learning platform for management and people-skills. Prior to Hone, Tom was co-founder and Chief Product Officer of gaming unicorn FanDuel, where over a decade he helped create a multi-award winning product and a thriving distributed team. He has had lifelong passions for education, technology, and business and is grateful for the opportunity to combine all three at Hone. Tom lives in San Diego with his wife and two young children.

Episode transcript

Tom Griffiths: This is Learning Works, a podcast presented by Hone. It's a series of in-depth conversations with L&D experts, HR leaders, and executives on how they've built game-changing learning and development strategies, unleashed business growth through talent development, and scaled their global L&D teams. Tune in for wisdom and actionable insights from the best in the industry.

I'm Tom Griffiths, CEO of Hone. Welcome to Learning Works.

All right. We are back for part two of our conversation with Will Thalheimer, a legend in the learning space and a real expert in translating learning science and research into actionable practices when it comes to learning design in the workplace. Will, thanks again for joining us. Ah, it's my pleasure, Tom.

Excited to jump into part two of the conversation. In part one, we covered a lot of ground around the state of learning research and how it's making its way into the workplace. We debunked some common myths in the L&D space, and I encourage you, if you haven't checked that out, to go have a listen to part one.

In part two, we're going to dive into putting Will's ideas into action. And so Will, if there's one aspect of L&D that you're most associated with, it's probably learning measurement. We've had many conversations between Hone and yourself on this over the years, and if it's possible, we'd love you to try and summarize your overall measurement philosophy as we kick off this part of the conversation.

Will Thalheimer: That's where I start: what's it for? And the answer is, we don't want to just collect a lot of data and create a report, right? We want to collect data that's meaningful, that we can use. So we want it to help us make better decisions and take better action. My overall philosophy is that we ought to be gathering data, evaluating learning, so that we can do better in our work.

Very simple. I often talk about the three reasons you might measure learning. One is to prove the value of the learning, to demonstrate the value. Number two is to support our learners in learning directly. And number three is to improve the learning, while also keeping what's good. So there's three buckets.

Prove, support the learners, and improve. They're all really valuable, they're all important. What I come down to, though, is that the third one is foundational. If we improve the learning by doing good learning evaluation, then we're going to certainly support our learners better, and we're going to create more value, and when we create more value, we'll be able to demonstrate or prove the value more easily.

There's a bunch of other specifics, but those are the main things. We're doing this for a reason. We don't want to be in the dark, and for too long in the field, we have been in the dark. Our two main ways of evaluating learning have been measuring attendance, or butts in seats, or completion rates, and using learner surveys, poorly designed learner surveys, what I've been calling smile sheets.

And those do not give us enough good feedback about how well we're doing. And so we create sort of superstitious behaviors. Oh, the learners loved it. It must be good. Mmm. Nah. No. That's not a good assumption. That's not a fair assumption. So really, just trying to figure out a better way to do things.

One of the things I teach in all my workshops is, Hey, there's no perfect measurement. No perfect learning measurement. Let's just do better than we're doing now. Let's start there. So we don't have to be perfect, but we can try to be better so that we get better information so we can make better decisions.

Tom Griffiths: I think that's such a crucial point. And yeah, I agree with the overall three reasons. But when you're actually in the wild trying to do learning measurement, you can have extreme viewpoints. One is I need to know the dollar value of every learning minute that happened and we need to measure everything.

And at the other extreme, you've got folks who say, well, it's impossible to measure this stuff, there's no way, there's no point in even trying. And I think how you're balancing that, by saying let's just continually improve how we're doing it, in pursuit of more coverage of what we're able to measure, for sure, but also not throwing out the idea that we can measure something and continue to get better.

I think that's a great point. I mean, have you come across places where it's valid to say, okay, we can't measure this in any way? Or have you always been able to find a way to get some kind of directional sense of whether things are working?

Will Thalheimer: Yeah, you can always find some way to measure it. Right. You might not be fully happy with it.

And a lot of it's about what you need, right? So one of the exercises I take people through is, you know, who are your stakeholders? What are their goals? What are you trying to accomplish with your learning and with your learning evaluation? What counts as evidence in your organization?

What counts as evidence? And we work through some of that. But as I said in the first part of our episode, you know, sometimes learning teams and even our business stakeholders, our organizational stakeholders, need a little bit of background on this so they can think intelligently about a learning evaluation.

Tom Griffiths: Yeah, absolutely. I mean, on that theme, what do you see people get wrong most often when it comes to learning measurement?

Will Thalheimer: Well, the big things are, first, they think that their traditional learner surveys are giving them good information. Right. And scientists have studied this.

In fact, there are two meta-analyses that together covered 150 scientific studies, published in refereed scientific journals, and they found that traditional smile sheets are correlated with learning results at .09. What statisticians tell us is that anything below .30 is a weak correlation, so .09 is virtually no correlation at all.

So, from a practical standpoint, if you get high marks on your smile sheets, you could have an effective course, but almost equally likely an ineffective course. If you get low marks on your smile sheets, you could have a poorly designed course, but almost equally likely a well designed course. So with traditional smile sheets, we just can't tell. You know, when I first saw that research, my first instinct was probably like a lot of your listeners'.

Well, we should not use them. But, you know, then I thought, well, wait a minute. We've been doing this for decade after decade. It's a tradition. It's also respectful to ask your learners what they think. And so the question then becomes, can we develop a better learner survey? And obviously I wrote a book on that.

I actually wrote it twice, second edition now. And so my answer is yes, we can create better learner surveys. Now, that's not all we should be doing. We can create learner surveys that give us some hints about whether the learning was effective or not. Yeah. Yeah. So that's one of the big things.
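For readers who want to see what a .09 correlation means in practice, here is a minimal simulation sketch. It is purely illustrative, not drawn from the meta-analyses Will references, and every name and number in it is hypothetical.

```python
# Illustrative sketch only: what a ~.09 correlation between smile-sheet scores
# and learning results looks like in practice. All values are hypothetical.
import random

random.seed(42)

def simulate_pairs(r, n=50_000):
    """Generate (smile_sheet, learning) score pairs with correlation of roughly r."""
    pairs = []
    for _ in range(n):
        shared = random.gauss(0, 1)
        noise = random.gauss(0, 1)
        smile = r * shared + (1 - r ** 2) ** 0.5 * noise  # standard construction for correlated normals
        pairs.append((smile, shared))  # 'shared' stands in for the learning result
    return pairs

pairs = simulate_pairs(0.09)

# Of the courses with above-average smile-sheet ratings, how many actually
# produced above-average learning? With r = .09 it is barely better than a coin flip.
above_avg_smile = [learning for smile, learning in pairs if smile > 0]
hit_rate = sum(1 for learning in above_avg_smile if learning > 0) / len(above_avg_smile)
print(f"P(above-average learning | above-average smile sheet) ~ {hit_rate:.2f}")  # roughly 0.53
```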

Learner surveys are a big problem. Another big problem I see out there is too much of a focus on trying to prove the value. And, you know, that's not a bad thing necessarily, but one of the things that we miss when we do that is we don't evaluate learning factors. So if you think about going from learning to results, there's a whole series of things that have to happen.

So first of all, you've got to have good content. If you've got bad content, then you're teaching them bad stuff, and that's not good. So you have to have good content. The learners have to pay attention to it, but they can't just pay attention to it; they have to pay attention to it in a way that will help them learn.

Then they have to comprehend it clearly. They've got to get the concepts clear in their head; they have to have the right mental models from it. And then you hope, after that, that they are motivated to apply what they've learned, that they're given practice in decision making, that they have some supports for remembering.

Because if you comprehend it, but you don't remember it next week, that's no good. And then some supports for action, you know, are people resilient when they hit obstacles, are they prepared for those? And so that's the learning side. And then you get to the work situation side and are there supports for people there when they're trying to apply it?

Has the learning been designed to trigger certain thoughts and actions? And then they put it into practice. Is it working? Are they successful? And then are they getting the results? So you can see this long causal pathway. If you only measure at the upper levels of this, if you only measure the results or the behavior, then you don't know where the breakdowns might have been.

You don't know what's working. So I think we become blind if we think, oh, we just need to prove the value, we just need to look at performance and results, because then we're missing the things that we as learning and development professionals have leverage over, the things we can control.

So that's the other big thing I see as a blind spot. 
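One way to picture the causal chain Will walks through is as an ordered list of checkpoints, where evaluating each link shows where a breakdown happened rather than only seeing that results fell short. This is a rough sketch, not a formal model from the episode; the checkpoint names are paraphrased from the conversation and the pass/fail data is hypothetical.

```python
# A rough sketch: the causal chain from learning to results, as ordered checkpoints.
# Checkpoint names are paraphrased; the example pass/fail data below is hypothetical.
CAUSAL_CHAIN = [
    "content_is_accurate",
    "learners_attend_in_a_way_that_supports_learning",
    "learners_comprehend_and_build_correct_mental_models",
    "learners_are_motivated_to_apply",
    "learners_practice_realistic_decisions",
    "supports_for_remembering_exist",
    "supports_for_action_and_resilience_exist",
    "workplace_supports_and_triggers_exist",
    "learners_apply_on_the_job",
    "application_is_successful",
    "business_results_follow",
]

def first_breakdown(checkpoint_results):
    """Return the earliest link in the chain that failed, or None if every link passed."""
    for link in CAUSAL_CHAIN:
        if not checkpoint_results.get(link, False):
            return link
    return None

# Hypothetical evaluation data: comprehension was fine, but nothing supported
# remembering, so application on the job never happened either.
example = {name: True for name in CAUSAL_CHAIN}
example["supports_for_remembering_exist"] = False
example["learners_apply_on_the_job"] = False

print(first_breakdown(example))  # -> supports_for_remembering_exist
```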

Tom Griffiths: Yeah, so instructive. Like you say, there's a multi-step causal chain from learning experience through to business impact. And if you're trying to short circuit that too much, you can lose the opportunity to see where things are breaking down.

You can lose the validity of the causal attribution of the learning event to the business outcome, where there may have been other confounding factors. So it's important to look at all of those steps. I think that's great. What is your latest thinking on our ability as learning professionals to get learning mapped all the way to dollars? Because we're living in a period where many businesses are tightening budgets and bringing more scrutiny to training and development budgets in particular.

And CFOs are demanding, I need to see the ROI of everything. And the more you can map that to dollars, the easier that conversation is. What's your view on how possible that is, and how perhaps that varies by the different types of learning that we can do?

Will Thalheimer: Well, it's definitely possible. And, you know, Jack and Patty Phillips have been at the forefront of this and they do really good work, but it's not always easy.

Now, ideally, what we'd like to have is real numbers, right? The real business numbers we'd like to be able to track. I've had this experience many times: people come to me and say, Hey, Will, can you help us with this learning design or learning evaluation we're thinking of doing? We want to prove the value.

Can you help us with that? I say, sure. And then as we get talking about it, I explain that, well, okay, to make sure that it was the training that made a difference, and not the economy or new management or a new product or whatever, we're going to have to use a randomized controlled trial, or a pre-test to post-test design, or a time series analysis.

And we're going to have some learners that take the learning and some that don't. And you start talking about the logistics and the costs, and then people say, Oh no, we don't really want that. So there are some very deep complications and costs when you really want to do an ROI study. And one of the things we ought to think about is the ROI of our ROI evaluation initiatives.

Definitely. It's all doable. We just want to prioritize. I'll give you an example. I was working with a client and we'd gone through several steps in the process. And at the very last meeting before we started developing the learning evaluation metrics, they invited a new person into the meeting. And the whole goal of the project had been to record the KPIs, look at the KPIs, and that kind of thing.

And this person must have been a very senior person in the organization, because this person just said to all the people in the meeting, who had already been convinced and who had basically come to me wanting to do an ROI KPI study, you know, I just don't believe that you can really separate out the training impact from all the other factors that affect our KPIs.

And everybody said, Oh yeah. Okay. So let's not do that. What do we do now? So we had to come up with something different, because at the end of the day, a senior stakeholder in that organization convinced them, after we'd done all this work, that maybe KPIs were not the things we were looking for.

We were able to keep a little bit on the KPIs, but we had to pivot to some other things as well: what I call KBIs, Key Behavioral Indicators.

Tom Griffiths: Which is an interesting, useful analog. And I think it serves the purpose of being more tightly coupled to the training that was delivered. Could you say a little bit more about KBIs?

Will Thalheimer: So, you know, key performance indicators typically get translated into things like revenues, costs, safety incidents, that kind of thing. And those are very valuable to measure. Oftentimes you can find them. I was working with a digital marketing team once, and I was able to go online and get a list of all the KPIs used in digital marketing.

They were using, you know, a few of them, but because we got these lists, they could brainstorm other ones that they might want to use. So in every field, there are KPIs out there. KBIs are things that are more behavioral, as the word says. They're usually not gathered by the organization, but they can be.

So for another project I was working on, this was in the mining industry, we developed KBIs around safety; safety is really important in mines. We had about 10 specific KBIs, and then we narrowed it down to five key ones that we really wanted to focus on in the evaluation. But they're things that you can wrap your heads around, you know, this is the kind of behavior that we're looking for.

Tom Griffiths: Yeah, that's great. And it's observable. And so you can see the impact more directly of someone who wasn't doing the behavior before the training experience and then is afterwards, and how widespread that is. That's great. I know another innovation that you've brought to the space is the Learning Transfer Evaluation Model, or LTEM, as an alternative really to the Kirkpatrick model that people may be familiar with.

I'd love to know what led you to want to create that more sophisticated model, and if you could talk us through it a little bit. 

Will Thalheimer: Just in case your listeners aren't familiar with the Kirkpatrick four-level model: I now call it the Kirkpatrick-Katzell four-level model, because it turns out that Raymond Katzell actually came up with the four-level idea, and then Donald Kirkpatrick put labels on it and popularized it.

So I feel like both men deserve credit. But the four-level model says, hey, we're going to measure, at level one, the learners' reactions. Level two is learning. Level three is behavior, and level four is results. And that model was born in the 1950s. It came out in some articles in the 1950s and 1960s, and it's been used ever since.

And it's probably still the most common model that's out there. So, you know, I've been in the field a long time, and I'd seen complaints about the model over the years, but I also noticed that we as learning and development professionals had a number of frustrations. We weren't able to measure what we wanted to measure.

And I got involved in doing some surveying of our industry, first with the eLearning Guild, back in 2007 I think we started, and then some on my own at Work-Learning Research, and then some even more with TiER1 and their learning trends report. Overall, the most recent numbers are that 65 percent of us as learning and development professionals are frustrated with the learning evaluation that we're able to do. So there were clearly chinks in the armor. And then I saw some research reviews of the training industry that said, hey, the Kirkpatrick model ignores 40 years of research on human learning, it leads to a checklist approach, some really damning critiques from researchers.

So I thought, well, it's been around a while. Between the time it was built and now, we've come through the cognitive revolution in learning psychology and in psychology generally. A lot has changed. Maybe we could create a better model. So I set out to do that around 2016 or so. And I created like 11 different iterations of it.

And I asked a lot of smart people, learning evaluation experts like Rob Brinkerhoff, Ingrid Guerra-Lopez, and learning experts like Julie Dirksen and Clark Quinn, a bunch of really smart practitioners. And they said, no, Will, that's no good. And so I improved it and then published it in 2018. It's about five and a half years old now.

And what LTEM does, and my philosophy about models, is that they should nudge us to do good things, to think good thoughts, and to think appropriately about the thing that they're modeling, and to push people to do good things, but also to push people away from doing things that aren't so productive.

So, LTEM has eight tiers. The first tier is attendance, the second tier is learner activity, and LTEM basically says it's fine to measure those things, but don't think that you can validate learning based on those things, because people could attend. They could complete a course. They could be active participants, but they may learn the wrong thing.

They may not learn. They may learn and forget. So we can't really validate our learning based on that. Tier 3 is about learner perceptions, and that's about learner surveys. But there's other ways to get perceptions as well. You could use focus groups, et cetera. Tier 4 is knowledge. And Tier 4, 5, and 6 in LTEM really are measuring the learning itself.

So Tier 4 is knowledge, Tier 5 is decision making competence, and Tier 6 is task competence. So the thing that LTEM does that the four level model does not do is it brings some learning wisdom back into our discussions of learning evaluation. You know, if you put learning all in one bucket, you could be talking about the regurgitation of trivia, or the recall or recognition of meaningless information, or meaningful information, or decision making competence, or task competence, or skills.

And so that's a big disparity between trivia and skills, right? But when we have it all in one bucket, like the four-level model does, then we tend to forget, and we say, oh, we need a level two, let's use a knowledge check. And we know that knowledge checks aren't good enough, because people can have the knowledge, but they may not know what to do, and they may not remember what to do, etc.

So again, LTEM's Tiers 4, 5, and 6 are knowledge, decision-making competence, and task competence. And then we get into Tier 7, which is transfer, that's behavior change. And Tier 8 is the results. There's one sort of subtlety about Tier 8 that's different from the four-level model. It reminds us that the organization is one of our stakeholders, but we also have potential stakeholders in our learners.

What effect does it have on them? Also potential stakeholders are the learners' coworkers, their family, their friends, and the society, the community, the environment. LTEM doesn't say you need to measure all these things, but it reminds us that, hey, there are other stakeholders besides the organization.

So I created LTEM as a way to bring some learning wisdom back into our discussions of learning evaluation. It's a long answer. 
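For reference, the eight tiers Will describes can be jotted down as a simple lookup, handy for tagging which tier a given evaluation activity actually measures. This sketch is not an official LTEM artifact; the tier labels are paraphrased from the discussion above, and the example activities are hypothetical.

```python
# A quick reference sketch, not an official LTEM artifact: the eight tiers as a lookup.
# Tier labels are paraphrased from the conversation above.
LTEM_TIERS = {
    1: "Attendance",
    2: "Learner activity",
    3: "Learner perceptions",
    4: "Knowledge",
    5: "Decision-making competence",
    6: "Task competence",
    7: "Transfer (behavior change on the job)",
    8: "Effects of transfer (results for the organization, learners, and beyond)",
}

def tier_label(tier: int) -> str:
    """Return a readable label, e.g. tier_label(5) -> 'Tier 5: Decision-making competence'."""
    return f"Tier {tier}: {LTEM_TIERS[tier]}"

# Example: tag each planned evaluation activity with the tier it actually measures.
planned_evaluations = {
    "completion report from the LMS": 1,
    "post-course learner survey": 3,
    "scenario-based decision quiz": 5,
    "manager observation checklist six weeks out": 7,
}
for activity, tier in planned_evaluations.items():
    print(f"{activity} -> {tier_label(tier)}")
```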

Tom Griffiths: No, thank you. It's a really sound framework that we've drawn a lot of use and inspiration from. And I love how, as you say, it brings more learning science or research into a simple-to-use tool when it comes to evaluation.

Is your recommendation that people try to design their learning measurement approaches to tackle every level? How do you counsel people to think about that? 

Will Thalheimer: Ah, that is a really important question. No. Absolutely not. There's hardly any training program that you should measure at all eight tiers of LTEM.

And the reason is that evaluating takes time and money; it has costs. It's an investment, and like any investment, you want to use it wisely. LTEM does not suggest you measure everything. But what it does do is provide a map of things you can measure. And then you can brainstorm to think about, well, we could measure this at this tier.

We could measure that at that tier. And then you create this brainstorm, and then you say, okay, now, what are the most important things we want to measure? That's really the vision that I had for LTEM in the beginning. One of the things that I didn't think about came out of Elham Arabi's work; she did her doctoral dissertation on LTEM.

And she had this hypothesis, which I thought was really interesting. She said, listen, if I introduce LTEM to a learning team, two things will happen. One is they'll be inclined to use better learning measurement. And number two, they will be inspired to use better learning designs. And that second part is really interesting.

Why would that be? Well, it kind of makes sense, right? So if you decide we're going to measure how good our learners are at making realistic decisions, the learning team is going to look at that and go, oh, well, if we're going to be measured on that, we'd better build a lot of realistic practice into our learning design.

And that's exactly what she found. She rolled this out in a hospital, in nurse professional development. And LTEM had these two impacts: it encouraged better learning evaluation and better learning design.

Tom Griffiths: That's so impressive. And I think, yeah, as you say, when you take a look at the more granular or specific levels of LTEM versus Kirkpatrick, you do want to have those things in mind as you're designing. Just curious, for that example or others where you've seen it drive good behavior on learning design, was it the task competence level that kind of jumps out as one that leads you to say, okay, I need to go beyond just a quiz here, where they know what they have to do, and I need to make sure I see them doing it? Or are there other levels that really stand out as positive drivers of good learning design?

Will Thalheimer: Well, it discourages you from thinking that you can get away with measuring butts in seats, right? Yep. And the number of posts that people make in a discussion board or something. So it discourages that. You know, it's color coded, so Tier 1 and Tier 2 are red. So, danger.

And so that's one thing. Like you said, I think it is Tier 6, task competence, but also Tier 5, decision-making competence. Both of those things are really realistic, right? They're important. We want people to be able to do those things. We design courses to help people perform differently.

Those things are clear measures of performance. So I think those are the things that really push people to see things differently. And by the way, people are using LTEM in all different types of ways. Some people are using it to have conversations with their clients: Hey, you want to talk to us about helping you build a learning program?

Let's look at LTEM first. We'll think about how we might evaluate this. What would count as evidence for you? People look at LTEM and they can brainstorm those things. And so it helps people clarify what their real expectations are. So there are lots of ways that people have been using LTEM to help them do the learning and development things that we need to do.

Tom Griffiths: Yeah, absolutely. And that's something that we've experienced as well. To an earlier point you made, it helps to educate the stakeholder at the same time in what robust learning measurement can look like. And as you say, at the start of the project or engagement, you can agree on what evidence would count at different levels and use that as a shared objective.

That's a really great use case. I've heard other folks using it to help A/B test learning solutions. I would love to know how you recommend people do the A/B testing element as it relates to LTEM.

Will Thalheimer: So I'm a big believer in A/B testing. In fact, there's a chapter in my new book on that. For those that don't know, A/B testing is basically, well, there's a bunch of different ways to do it, but you're basically comparing two groups or two products or two versions.

And so, I suppose, and I'd love to hear how you're thinking about this, you could use LTEM by sort of brainstorming: well, what would we want to measure? I'm a big believer in Tier 5, so just measuring decision-making competence. As I mentioned in our first episode, I got into this field and I did a lot of work with simulations, and we used a lot of scenario questions to build those simulations out.

And those are really powerful. If you look at a scenario question, you go, oh yeah, it's easy to create one of those. Well, they're not as easy to create as you think, but they're still relatively good bang for the buck. So let's just say you want to do A/B testing and you're going to create these different scenarios.

And you'll tie the scenarios to your learning objectives, and maybe you'll have two questions on each learning objective or something like that. One of the things that I do in developing scenario questions to use for evaluation is I create an initial version for all the learning objectives I'm trying to get across.

So let's say I have 10 learning objectives, I would create 10 questions. And I would validate those with experts actually having them answer the question. And if they got the same right answer that I think is the right answer, we keep the question. If not, we either change the question or throw it out.

And then we create a clone question. So, you know, in the one question, Sally's in the finance department. And if we cloned it, we might have Joe in the marketing department. You change the background context, but not the meaningful stuff, just the sort of superficial stuff. And then what I would do is randomly assign the questions to a pre-test or post-test.

Okay, so that's just some of the mechanics of that. But in terms of an A/B test, you've got your questions, they're validated, you've got a bunch of them. And so you might have two versions of your program. So instead of your learning team fighting about which version is better, you say, okay, well, why don't we create two versions and see which one works best?

Oh, great idea, then we don't have to fight. Yeah, we don't have to waste our time fighting. We don't have to let the status of the individuals decide what's right or wrong. We'll actually see what works better. And so then we would use our scenario questions at the LTEM Tier 5 level, and we would see which program did better in creating learning, as assessed by those questions.
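Here is a rough sketch of the mechanics Will describes: validated scenario questions and their clones are randomly split between a pre-test and a post-test, and two program versions are compared on their gains. It is illustrative only; the question labels and score data are hypothetical, not from any real study.

```python
# Illustrative sketch of the A/B mechanics described above; all data is hypothetical.
import random
from statistics import mean

random.seed(7)

# Each learning objective gets a validated scenario question plus a "clone" that
# swaps surface details (e.g., Sally in finance vs. Joe in marketing).
question_pairs = [(f"LO{i}-original", f"LO{i}-clone") for i in range(1, 11)]

def split_pre_post(pairs):
    """Randomly put one question from each pair on the pre-test and the other on the post-test."""
    pre_test, post_test = [], []
    for original, clone in pairs:
        first, second = random.sample([original, clone], 2)
        pre_test.append(first)
        post_test.append(second)
    return pre_test, post_test

pre_test, post_test = split_pre_post(question_pairs)
print(f"Pre-test items: {pre_test[:3]} ...")

# Hypothetical percent-correct scores on the Tier 5 (decision-making) items
# for two versions of the same program.
scores = {
    "Version A": {"pre": [0.42, 0.38, 0.45], "post": [0.61, 0.58, 0.64]},
    "Version B": {"pre": [0.40, 0.44, 0.41], "post": [0.74, 0.70, 0.77]},
}

for version, s in scores.items():
    gain = mean(s["post"]) - mean(s["pre"])
    print(f"{version}: average gain = {gain:.2f}")
```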

Tom Griffiths: Yeah, that's great. I mean, it's kind of like a controlled trial in, you know, drug discovery, to understand, did this actually have an effect. Or, like you say, you can use it to optimize between two design approaches. I think that's really cool. And if you're tracking it on multiple tiers of LTEM, then you can see perhaps where the differences lie and debug from there.

So I love that approach. Thanks for sharing. We're just about out of time here, Will, but we ask everyone this quickfire round at the end. When you think about giving advice to folks in the industry, a listener who might be a learning leader just starting out in their career or perhaps a little bit further along, what advice would you give them to start doing something?

Will Thalheimer: Start doing something. So, number one, use more research-inspired practices. And number two, leverage the performance sciences, not just the learning sciences; there are a lot of performance sciences bubbling up, like habit science, nudging science, performance triggering, network science, a bunch of those. Leverage learning in the workflow.

Let's do that in a way that works. Help managers be better stewards of learning. They are the organizational glue. They're sort of our force multipliers in our organizations. If we in learning can help our managers become better stewards of learning, that's great. Let's upskill ourselves as the learning team.

We're sort of like the cobbler's kids without good shoes. We get the least amount of learning in the organization sometimes, but let's upskill ourselves. There's a lot to learn. We should also start getting more outside, objective feedback on our learning practices. You know, we go to the doctor every year to get checked out.

We bring in our car every year to get checked out. It's good to get checked out. And then finally, let's use better learning evaluation methods. If we do that, we can create virtuous cycles of continuous improvement. 

Tom Griffiths: Love it. Fantastic. Some good things to start there in our stop, start, continue rapid fire.

What about stop? What should people stop doing? 

Will Thalheimer: This may be controversial, but I think we should stop pandering to the C-suite with dubious business-y-sounding metrics. We have some serious evaluation approaches we can use. We don't need to be pretending to be business-y with, you know, certain business-y-sounding metrics.

We don't need to do that. We should stop focusing on butts in seats and happy learners as our primary metrics. Finally, we should stop ourselves from making common learning design mistakes and acting on some of the learning myths.

Tom Griffiths: Hear, hear! Agree with all of that. And thanks for sharing a bunch on how people can do better in our conversation so far.

Finally, what are people doing that they should keep doing and continue? 

Will Thalheimer: We should still be experimenting with technology. There are lots of new learning technologies bubbling up, including AI, generative AI. We should be experimenting with those. We should be improving how we do online learning. There are new tools out there, but also we're sort of in the pioneer phase.

I can see myself getting better at it, and everybody else as well, so we should continue doing that. We should continue to utilize research-inspired learning designs. We've been doing more and more of that; we should continue with that. And finally, we should continue building a strong team of learning professionals, helping each other. The new folks, we should educate them, bring them along.

You know, the learning profession is filled with a lot of dedicated, passionate people. We should continue supporting each other and getting better and better.

Tom Griffiths: Right on. I agree with that. Well, thanks so much. You've given us a ton of wisdom over this conversation. You bring so much to the field.

It's an honor to be able to spend some time together, drilling into your expertise, and I somehow feel like we've only touched the tip of the iceberg on some of these topics. So we encourage everyone to go and check out the various ways you've put your wisdom out into the world over the years, with the books and the website, and perhaps, you know, connecting for conversations as well.

So, a big thanks from everyone here at the Hone team for your time. Really enjoyed it. Thanks so much.

Will Thalheimer: Well, thanks for inviting me, Tom. And thanks to the folks back at Hone. 

Tom Griffiths: Thanks for listening to Learning Works. If you've enjoyed today's conversation, we encourage you to subscribe to the podcast for our exciting lineup of future episodes.

Learning Works is presented by Hone. Hone helps busy L&D leaders easily scale power-skills training through tech-powered live cohort learning experiences that drive real ROI and lasting behavior change. If you want even more resources, you can head to our website, honehq.com. That's H-O-N-E-H-Q dot com, for upcoming workshops, articles, and to learn more about Hone.

Tom Griffiths: This is Learning Works, a podcast presented by Hone. It's a series of in depth conversations with L& D experts, HR leaders, and executives on how they've built game changing learning and development strategies, unleashed business growth through talent development, and scaled their global L& D teams. Tune in for the wisdom and actionable insights, the best in the industry.

I'm Tom Griffiths, CEO of Hone. Welcome to Learning Works.

All right. We are back for part two of our conversation with Will Thalheimer, a legend in the learning space and real expert in translating learning science and research to actionable practices when it comes to learning design in the workplace. Will, thanks again for joining us. Ah, it's my pleasure, Tom.

Excited to jump into part two of the conversation. In part one, we covered a lot. of breadth around the state of learning research and how it's making its way into the workplace. We debunked some common myths in the L& D space, and I encourage you if you haven't checked that out to go have a listen to part one.

In part two, we're going to dive into putting Will's ideas into action. And so Will, if there's one aspect of L& D that you're most associated with, it's probably learning measurements. We've had many conversations between Hone and yourself on this over the years and If it's possible, we'd love you to try and summarize your overall measurement philosophy as we kick off this part of the conversation.

Will Thalheimer: That's where I start. What's it for? And the answer is, we don't want to just collect a lot of data and create a report, right? We want to collect data that's meaningful, that we can use. So we want it to help us make better decisions and take better action. So. My overall philosophy is that, that we ought to be gathering data, evaluating learning so that we can do better in our work.

Very simple. There's, I often talk about the three reasons you might measure learning. One is to prove the value of the learning, to demonstrate the value. Number two is to support our learners in learning directly. And number three is to improve the learning, while also keeping what's good. So there's three buckets.

Improve, support the learners, and improve. Really valuable. They're all, they're, you know, they're all important. What I come down to though, I feel like the third one is foundational. If we improve, The learning by doing good learning evaluation, then we're going to certainly support our learners better, and we're going to create more value, and when we create more value, we'll be able to demonstrate or prove the value more easily.

There's a bunch of other specifics, but those are the main things. We're doing this for a reason. We don't want to be in the dark, and for too long in the field, we have been in the dark. Our two main ways that we evaluate learning are measuring attendance, or butts in seats, or completion rates, and using learner surveys, and poorly designed learner surveys, what I've been calling smile sheets.

And those do not give us enough good feedback about how well we're doing. And so we create sort of superstitious behaviors. Oh, the learners loved it. It must be good. Mmm. Nah. No. That's not a good assumption. That's not a fair assumption. So really, just trying to figure out a better way to do things.

One of the things I teach in all my workshops is, Hey, there's no perfect measurement. No perfect learning measurement. Let's just do better than we're doing now. Let's start there. So we don't have to be perfect, but we can try to be better so that we get better information so we can make better decisions.

Tom Griffiths: I think that's such a crucial point. And yeah, I agree with the overall three reasons. But when you're actually in the wild trying to do learning measurement, you can have extreme viewpoints. One is I need to know the dollar value of every learning minute that happened and we need to measure everything.

And at the other extreme, got folks to say, well, it's impossible to measure this stuff. There's no way, there's no point in even trying and I think that how you're balancing that by saying let's just continually improve how we're doing it in pursuit of, you know, more coverage for sure of what we're able to measure, but also not throwing out the idea that we can measure something and continue to get better.

I think that's a great point. I mean, have you come across places where it's valid to say, Okay. We can't measure this in any way, or have you always been able to find a way to get some kind of a directional sense of whether things are working? 

Will Thalheimer: Yeah, you can always find some way to measure it. Right. You might not be fully happy with it.

And a lot of it's about about what you need, right? So one of the, one of the exercises I take people through is, you know, who are your stakeholders? What are their goals? What are you trying to accomplish with your learning evaluation, with your learning and with your learning evaluation? What counts as evidence in your organization?

What counts as evidence and work through some of that. But as I said in the first part of our episode, you know, sometimes learning teams and even Our business stakeholders, our organizational stakeholders, they need a little bit of background on this so they can think intelligently about a learning evaluation.

Tom Griffiths: Yeah, absolutely. I mean, on that theme, what do you see people get wrong most often when it comes to learning measurement?

Will Thalheimer: Well, the big things are, they think that their traditional learner surveys are giving them good information. Right. And this, there's like people, the scientists have studied this.

In fact, there's two meta analyses that covered 150 scientific studies together, and these were published in scientific referee journals, and they found that traditional smile sheets are correlated with learning results at 09, and what statisticians tell us is anything below 30 is a weak correlation, so 09 is virtually no correlation at all.

So, from a practical standpoint, if you get high marks On your smile sheets, you could have an effective course, but almost equally likely an ineffective course. If you get low marks on your smile sheet, you can have a poorly designed course, but almost equally likely a well designed course. So with traditional smile sheets, we just can't tell, you know, when I first saw that research, my first instinct is probably like a lot of your listeners.

Well, we should not use them, but. You know, then I thought, well, wait a minute. We've been doing this for decade after decade. It's a tradition. It's also respectful to ask your learners what they think. And so the question then becomes, can we develop a better learner survey? And obviously I wrote a book on that.

I actually wrote it twice, second edition now. And so my answer is yes, we can create better learner surveys. Now, that's not all we should be doing. We can create learner surveys that give us some hints about whether the learning was effective or not. Yeah. Yeah. So that's one of the big things.

The learner surveys is a big problem. Another big problem I see out there is too much of a focus on trying to prove the value and you know, that's not a bad thing necessarily, but one of the things that we miss when we do that is we don't evaluate learning factors. So if you think about going from, you know, learning to results, you know, there's a sort of, there's a whole series of things that have to happen.

So first of all, you got to have good content. If you got bad content, then you're teaching them bad stuff and that's not good. So you have to have good content. The learners have to pay attention to it, but they don't have to just pay attention to it. They have to pay attention to it in a way that will help them learn.

Then they have to comprehend it clearly. They've got to get the concepts clear in their head, they have to have the right mental models from it. And then, you hope after that, that they are motivated to apply what they've learned, that they've given practice in decision making, that they have some supports for remembering.

Because if you comprehend it, but you don't remember it next week, that's no good. And then some supports for action, you know, are people resilient when they hit obstacles, are they prepared for those? And so that's the learning side. And then you get to the work situation side and are there supports for people there when they're trying to apply it?

Are there, has the learning been designed to trigger certain thoughts and actions? And then they put it into practice. Is it working? Are they successful? And then are they getting the results? So you can see this. Long causal pathway. If you only measure at the upper levels of this, if you only measure like the results or the behavior, then you don't know about where the breakdowns might have been.

You don't know what's working, so I think we become too blind if we think, Oh, we just need to prove the value. We just need to look at performance and results because then we're missing the things that we as learning and development professionals The things we have leverage over, things we can control.

So that's the other big thing I see as a blind spot. 

Tom Griffiths: Yeah. Yeah, so instructive. Like you say, there's a. multi step causal chain from learning experience through to business impact. And if you're trying to short circuit that too much, you can lose the opportunity to see where things are breaking down.

You can lose the validity of the causal attribution of the learning event to the business outcome where there may have been other conflated factors. So it's important to look at all of those steps. I think that's great. What is your latest thinking on Our ability is learning professionals to get learning mapped all the way to dollars because we're living in a period where many businesses are tightening budgets and being more bringing more scrutiny to training and development budgets in particular.

And CFOs are demanding, I need to see the ROI of everything. And the more you can do that to dollars, the easier that conversation is. What's your view on how possible that is in, and how perhaps that varies by the different types of learning that we can do? 

Will Thalheimer: Well, it's definitely possible. And, you know, Jack and Patty Phillips have been at the forefront of this and they do really good work, but it's not always easy.

There's some there's some folks out there that. Now, ideally, what we'd like to have is real numbers, right? The real business numbers we'd like to be able to track. I've had this experience many times people come to me and say, Hey, Will, can you help us with this learning design or learning evaluation we're thinking of doing a, we want to prove the value.

Can you help us with that? I say, sure. And then as we get talking about it and I talk about, well, okay, we're going to have to decide, you know, to do, to make sure that it was the training that made a difference and not the economy or a new management or a new product or whatever. We're going to have to use a randomized controlled trial, or pre test to post test, or time series analysis.

And we're going to have some learners that take the learning and some that don't. And you start talking about the logistics and the costs, and then people say, Oh no, we don't really want that. So, there are some very deep complications. And costs when you really want to do an ROI study. And one of the things we ought to think about is the ROI of our ROI evaluation initiatives.

Definitely. It's all doable. We just want to prioritize. I'll give you an example. I was working with a client and we'd gone through several steps in the process. And at the very last meeting before we started developing the learning evaluation metrics, they invited a new person into the meeting. And the whole goal of the project had been to to record the KPIs, look at the KPIs, and that kind of thing.

And this person must have been a high senior person in the organization, because this person just said to all the people in the meeting who had already been convinced, And who basically came to me wanting to do an ROI KPI study. This person came in and said, you know, I just don't believe that you can really separate out the training impact from all the other factors that affect our KPIs.

And everybody said, Oh yeah. Okay. So let's not do that. What do we do now? So we had to come up with something different because they, at the end of the day, some stakeholder, senior stakeholder in that organization. Convince them after we've done all this work, that maybe KPIs are not the things we were looking for.

We were able to keep a little bit on the KPIs, but we had to pivot to some other things as well. What I call KBIs, Key Behavioral Indicators. 

Tom Griffiths: Which is an interesting, useful analog. And I think. It serves the purpose of being more tightly coupled to the training that was delivered. Could you say a little bit about KBIs?

Will Thalheimer: So, so, you know, so key performance indicators typically get translated into things like revenues, costs safety incidents, that kind of thing. And those are very valuable to measure. Oftentimes you can, I was working with a digital marketing team once. And I was able to go online and get a list of all the KPIs using digital marketing.

They were using, you know, a few of them, but there was, you know, because we got these lists, they could brainstorm other ones that they might want to use. So in every field, there's KPIs that are out there. KBIs are things that are more behavioral, as the word says. And the things that you can sort of, they're usually not gathered by the organization, but they can be.

So for another project I was working on, this is in the mining industry, we developed KBIs around the safe, safety is really important in mines. And so we had some specific KBIs and we had about 10 of them. And then we narrowed it down to five key ones that we really wanted to focus on in the evaluation, but things that you can sort of wrap your heads around that, you know, that this is the kind of behavior that we're looking for.

Tom Griffiths: Yeah, that's great. And it's observable. And it's. And so you can see the impact more directly of someone who wasn't doing the behavior before the training experience and then is afterwards and how widespread that is. That's great. I know another innovation that you've brought to the space is the learning transfer, excuse me, Learning Transfer Evaluation Model or LTEM as an alternative really to the Kirkpatrick model that people may be familiar with.

I'd love to know what led you to want to create that more sophisticated model, and if you could talk us through it a little bit. 

Will Thalheimer: Just in case your listeners aren't familiar with the Kirkpatrick four level model, and I now call it the Kirkpatrick Cassell four level model, because it turns out that Raymond Cassell actually came up with the four level idea and then Donald Kirkpatrick put labels on it and popularize it.

So I feel like both men deserve credit. But the four level model says, Hey, we're going to measure at level one, the learner's reactions. Level two is learning. Level three is behavior and level four is results. And that model was born in the 1950s. Came out in some articles in the 1950s, 1960s, and it's been used ever since.

And it's probably still the most common model that's out there. So. You know, I've been in the field a long time and, you know, I'd seen complaints about the model over the years, but also I noticed that we had we as learning and development professionals had a number of frustrations. We weren't able to measure what we wanted to measure.

And I got involved in doing some surveying of our industry first with the E Learning Guild. Back in 2007, I think we started out and then I did some on my own to work learning research and then some even more with tier one and their learning trends report and overall, the most recent numbers are 65 percent of us as learning and development professionals are frustrated with the learning evaluation that we're able to do and so there was clearly chinks in the armor and then I saw some research reviews of the industry, the training industry that said, Hey, The Kirkpatrick model ignores 40 years of research on human learning, leads to a checklist approach, some really damning kind of critiques from researchers.

So I thought, well, it's been around a while between the time it was built. And now we've come through the cognitive revolution in learning psychology and psychology. A lot has changed. Maybe we could create a better model. So I set out to do that around 2016 or so. And I created like 11 different iterations of it.

And I asked a lot of smart people, learning evaluation experts like Rob Brinkerhoff, Ingrid Guerra-Lopez, and learning experts like Julie Dirksen and Clark Quinn, a bunch of really smart practitioners. And they said, no, Will, that's no good. And so I improved it and then published it in 2018. It's about five and a half years old now.

And what LTEM does, and my philosophy about models. is that they should nudge us to do good things, to think good thoughts, and to think appropriately about the thing that they're modeling, and to push people to do good things, but also to push people away from doing things that aren't so productive.

So, LTEM has eight tiers. The first tier is attendance, the second tier is learner activity, and LTEM basically says it's fine to measure those things, but don't think that you can validate learning based on those things, because people could attend. They could complete a course. They could be active participants, but they may learn the wrong thing.

They may not learn. They may learn and forget. So we can't really validate our learning based on that. Tier 3 is about learner perceptions, and that's about learner surveys. But there's other ways to get perceptions as well. You could use focus groups, et cetera. Tier 4 is knowledge. And Tier 4, 5, and 6 in LTEM really are measuring the learning itself.

So Tier 4 is knowledge, Tier 5 is decision making competence, and Tier 6 is task competence. So the thing that LTEM does that the four level model does not do is it brings some learning wisdom back into our discussions of learning evaluation. You know, if you put learning all in one bucket, you could be talking about the regurgitation of trivia, or the recall or recognition of meaningless information, or meaningful information, or decision making competence, or task competence, or skills.

And so that's a big disparity between trivia and skills, right? But when we have it all in one bucket, like the four level model does, Then we tend to forget and we say, Oh, we need a level two. Oh, let's use a knowledge check. And we know that knowledge checks aren't good enough because people can have the knowledge, but they may not know what to do, and they may not remember what to do, etc.

So, LTEM has tiers 4, again, are knowledge, decision making competence, task competence. And then we get into tier 7, which is transfer, that's behavior change. And tier 8 is the results. There's one sort of subtlety about Tier 8 that's different than the four level model. It reminds us that the organization is one of our stakeholders, but we also have potential stakeholders in our learners.

What effect does it have on them? Also a potential stakeholder is the learner's coworkers, their family, their friends, and the society, the community, the environments. It doesn't, LTEM doesn't say you need to measure all these things, but it reminds us that, hey, there are other stakeholders besides.

So I created LTEM as a way to bring some learning wisdom back into our discussions of learning evaluation. It's a long answer. 

Tom Griffiths: No, thank you. It's a really sound framework that we've drawn a lot of use and inspiration from. And I love how, as you say, it brings more learning science or research into a simple to use tool when it comes to evaluation.

Is your recommendation that people try to design their learning measurement approaches to tackle every level? How do you counsel people to think about that? 

Will Thalheimer: Ah, that's a, that is a really important question. No. Absolutely not. There's hardly any training program that you should measure at all eight tiers on LTEM.

And the reason is that evaluating takes time, money, costs. It's a, it's an investment. And like any investment, you want to use it wisely. LTEM does not suggest you do, you measure everything. But what it does do is it provides a map of things you can measure. And then you can brainstorm to think about, well, we could measure this at this tier.

We could measure this at this tier. And then you create this brainstorm and then you say, no, let's, what are the most important things we want to measure? That's really the vision that I had for LTEM in the beginning. One of the things that I didn't think about, but Elham Arabi did her doctoral dissertation on LTEM.

And she had this hypothesis, which I thought was really interesting. She said, listen, if I introduce LTEM to a learning team, two things will happen. One is they'll be inclined to use better learning measurement. And number two, they will be inspired to use better learning designs. And that second part is really interesting.

Why would that be? Well, it kind of makes sense, right? So if you decide we're going to, we're going to measure how well our learners are at being able to make realistic decisions. So the learning team is going to look at that and go, Oh, well, if we're going to be measured on that, we better build a lot of practice on realistic practice into our learning design.

And that's exactly what she found. She rolled this out in a hospital, in nurse professional development, and LTEM had those two impacts: it encouraged better learning evaluation and better learning design.

Tom Griffiths: That's so impressive. And I think, yeah, as you say, when you look at the more granular or specific tiers of LTEM versus Kirkpatrick, you do want to have those things in mind as you're designing. I'm just curious, for that example or others where you've seen it drive good behavior in learning design: is it the task competence tier that jumps out as the one that leads you to say, okay, I need to go beyond a quiz where they know what they have to do, and make sure I see them doing it? Or are there other tiers that really stand out as positive drivers of good learning design?

Will Thalheimer: Well, it discourages you, and everyone else, from thinking that you can get away with measuring butts in seats, right? Yep. Or the number of posts that people make in a discussion board or something. It discourages that. You know, it's color coded, so Tier 1 and Tier 2 are red. Danger.

And so that's one thing. I think it's like you said: it is Tier 6, task competence, but also Tier 5, decision making competence. Both of those are really realistic, right? They're important. We want people to be able to do those things. We design courses to help people perform differently.

Those things are clear measures of performance. So I think those are the things that really push people to see things differently. And by the way, people are using LTEM in all different types of ways. Some people are using it to have conversations with their clients: hey, you want to talk to us about helping you build a learning program?

Let's look at LTEM first. We'll think about how we might evaluate this. What would count as evidence for you? People look at LTEM and they can brainstorm those things, and it helps them clarify what their real expectations are. So there are lots of ways that people have been using LTEM to help them do the learning and development things we need to do.

Tom Griffiths: Yeah, absolutely. And that's something that we've experienced as well. To an earlier point you made, it helps educate the stakeholder at the same time on what robust learning measurement can look like. And as you say, at the start of the project or engagement, you can agree on what evidence would count at different levels and use that as a shared objective.

That's a really great use case. I've heard of other folks using it to help A/B test learning solutions. I would love to know how you recommend people do the A/B testing element as it relates to LTEM.

Will Thalheimer: So I'm a big believer in A/B testing. In fact, there's a chapter in my new book on that. For those who don't know, A/B testing is basically, well, there are a bunch of different ways to do it, but you're basically comparing two groups, or two products, or two versions.

And so I suppose, and I'd love to hear how you're thinking about this, you could use LTEM by brainstorming: well, what would we want to measure? I'm a big believer in Tier 5, just measuring decision making competence. As I mentioned in our first episode, I got into this field doing a lot of work with simulations, and we used a lot of scenario questions to build those simulations out.

And those are really powerful. If you look at a scenario question, you go, oh yeah, it's easy to create one of those. Well, they're not as easy to create as you think, but they're still a relatively good bang for the buck. So let's just say you want to do A/B testing and you're going to create these different scenarios.

You'll tie the scenarios to your learning objectives, and maybe you'll have two questions on each learning objective, or something like that. One of the things I do in developing scenario questions for evaluation is create an initial version for each of the learning objectives I'm trying to get across.

So let's say I have 10 learning objectives; I would create 10 questions. And I would validate those with experts by actually having them answer the questions. If they get the same right answer that I think is the right answer, we keep the question. If not, we either change the question or throw it out.

And then we create a clone question. So, you know, in the one question, Sally's in the finance department; if we cloned it, we might have Joe in the marketing department. You change the background context, but not the meaningful stuff, just the superficial stuff. And then what I would do is randomly assign the questions to a pre-test or post-test.

Okay, so that's just some of the mechanics. But in terms of an A/B test: you've got your questions, they're validated, you've got a bunch of them, and you might have two versions of your program. So instead of your learning team fighting about which version is better, you say, okay, well, why don't we create two versions and see which one works best?

Oh, great idea, then we don't have to fight. Yeah, we don't have to waste our time fighting, and we don't have to let the status of individuals decide what's right or wrong. We'll actually see what works better. And so then we would use our scenario questions at LTEM Tier 5, and we would see which program did better at creating learning, as assessed by those questions.
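To make those mechanics concrete, here is a rough sketch in Python of how the workflow Will describes could be wired up: validated scenario questions are cloned, each pair is randomly split between a pre-test and a post-test, and the pre-to-post gain on those Tier 5 questions is compared across two versions of the program. All names and numbers below are invented for illustration; they are not from the episode or from Will's book.

```python
import random
from statistics import mean

# Hypothetical validated scenario-question pairs: (original, clone).
# e.g., the same decision, but 'Sally in finance' vs. 'Joe in marketing'.
question_pairs = [
    ("obj1_sally_finance", "obj1_joe_marketing"),
    ("obj2_original", "obj2_clone"),
    ("obj3_original", "obj3_clone"),
]

def split_pre_post(pairs, rng=random):
    """Randomly send one question of each pair to the pre-test and its clone
    to the post-test, so the two tests are comparable but not identical."""
    pre, post = [], []
    for original, clone in pairs:
        first, second = (original, clone) if rng.random() < 0.5 else (clone, original)
        pre.append(first)
        post.append(second)
    return pre, post

def tier5_gain(pre_scores, post_scores):
    """Average pre-to-post improvement in the share of scenario questions
    answered correctly (a Tier 5, decision-making measure)."""
    return mean(post_scores) - mean(pre_scores)

pre_test, post_test = split_pre_post(question_pairs)
print("Pre-test questions:", pre_test)
print("Post-test questions:", post_test)

# Illustrative (made-up) learner scores for two versions of the same program.
gain_a = tier5_gain(pre_scores=[0.40, 0.50, 0.45], post_scores=[0.70, 0.75, 0.80])
gain_b = tier5_gain(pre_scores=[0.42, 0.50, 0.48], post_scores=[0.60, 0.62, 0.65])

print(f"Version A gain: {gain_a:.2f}")
print(f"Version B gain: {gain_b:.2f}")
print("Better-performing design:", "A" if gain_a > gain_b else "B")
```

In a real evaluation you would also want enough learners in each version, and some check that the observed difference is larger than noise, before declaring a winner.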

Tom Griffiths: Yeah, that's great. I mean, it's kind of like a controlled trial in, you know, drug discovery, to understand whether this actually had an effect. Or, like you say, you can use it to optimize between two design approaches. I think that's really cool. And if you're tracking it at multiple tiers of LTEM, then you can see perhaps where the differences lie and debug from there.

So I love that approach. Thanks for sharing. We're just about out of time here, Will, but we ask everyone this quick-fire round at the end. When you think about giving advice to folks in the industry, and a listener can often be a learning leader just starting out in their career, or perhaps a little bit further along, what advice would you give them on something to start doing?

Will Thalheimer: Start doing something. So, number one, use more research-inspired practices. And number two, leverage the performance sciences, not just the learning sciences. There are a lot of performance sciences bubbling up, like habit science, nudging science, performance triggering, network science, a bunch of those. Leverage learning in the workflow.

Let's do that in a way that works. Help managers be better stewards of learning. They are the organizational glue; they're sort of our force multipliers in our organizations. If we in learning can help our managers become better stewards of learning, that's great. Let's upskill ourselves as the learning team.

We're sort of like the cobbler's kids without good shoes. We get the least amount of learning in the organization sometimes, but let's upskill ourselves. There's a lot to learn. We should also start getting more outside, objective feedback on our learning practices. You know, we go to the doctor every year to get checked out.

We bring in our car every year to get checked out. It's good to get checked out. And then finally, let's use better learning evaluation methods. If we do that, we can create virtuous cycles of continuous improvement. 

Tom Griffiths: Love it. Fantastic. Some good things to start there in our stop, start, continue rapid fire.

What about stop? What should people stop doing? 

Will Thalheimer: This may be controversial, but I think we should stop pandering to the C-suite with dubious, business-y-sounding metrics. We have some serious evaluation approaches we can use. We don't need to pretend to be business-y with, you know, certain business-y-sounding metrics.

We don't need to do that. We should stop focusing on butts in seats and happy learners as our primary metrics. Finally, we should stop ourselves from making common learning design mistakes and using some of the myths. 

Tom Griffiths: Hear, hear! Agree with all of that. And thanks for sharing a bunch on how people can do better in our conversation so far.

Finally, what are people doing that they should keep doing and continue? 

Will Thalheimer: We should still be experimenting with technology. There are lots of new learning technologies bubbling up, including AI and generative AI, and we should be experimenting with those. We should be improving how we do online learning. There are new tools out there, but we're also sort of in the pioneer phase.

I can see myself getting better at it, and everybody else as well, so we should continue doing that. We should continue to utilize research-inspired learning designs; we've been doing more and more of that, and we should continue with it. And finally, we should continue building a strong team of learning professionals, helping each other, and educating the new folks and bringing them along.

You know, the learning profession is filled with a lot of dedicated, passionate people. We should continue supporting ourselves and getting better and better.

Tom Griffiths: Right on. I agree with that. Well, thanks so much. You've given us a ton of wisdom over this conversation. You bring so much to the field.

It's an honor to be able to spend some time together drilling into your expertise, and I somehow feel like we've only touched the tip of the iceberg on some of these topics. So we encourage everyone to go and check out the various ways you've put your wisdom out into the world over the years, with the books and the website, and perhaps, you know, connecting for conversations as well.

So, a big thanks from everyone here at the Hone team for your time. Really enjoyed it. Thanks so much.

Will Thalheimer: Well, thanks for inviting me, Tom. And thanks to the folks back at Hone. 

Tom Griffiths: Thanks for listening to Learning Works. If you've enjoyed today's conversation, we encourage you to subscribe to the podcast for our exciting lineup of future episodes.

Learning Works is presented by Hone. Hone helps busy L&D leaders easily scale power-skills training through tech-powered, live cohort learning experiences that drive real ROI and lasting behavior change. If you want even more resources, you can head to our website, honehq.com. That's H-O-N-E-H-Q dot com, for upcoming workshops, articles, and to learn more about Hone.