
Demystifying Learning Science & Debunking Learning Myths with Will Thalheimer, Part 1

What's covered

In this episode, Tom Griffiths is joined by Dr. Will Thalheimer, a world-renowned L&D practitioner, researcher, and author with nearly four decades of expertise. In part one of their discussion, Tom and Will dissect the gap between learning and research, debunk common learning myths, and compare live instructor-led training with async e-learning. They also offer practical advice for L&D leaders striving to implement research-based strategies and explore the role of AI in shaping the future of L&D. Join them for a thought-provoking discussion where Will pulls insights from his groundbreaking work, including the creation of LTEM (the Learning-Transfer Evaluation Model), his award-winning book "Performance-Focused Learner Surveys," and his forthcoming "CEO's Guide to Training, Learning, and Work."

About the speakers


Dr. Will Thalheimer

World-renowned L&D practitioner, researcher, and author

Will Thalheimer is a learning expert, researcher, instructional designer, business strategist, speaker, and author. He has worked in the learning and performance field since 1985. In 1998, Will founded Work-Learning Research to bridge the gap between research and practice, compile research on learning, and disseminate research findings to help chief learning officers, instructional designers, trainers, e-learning developers, performance consultants, and learning executives build more effective learning and performance interventions and environments. He speaks regularly at national and international conferences. Will holds a BA from the Pennsylvania State University, an MBA from Drexel University, and a PhD in educational psychology: human learning and cognition from Columbia University.


Tom Griffiths

CEO and co-founder, Hone

Tom is the co-founder and CEO of Hone, a next-generation live learning platform for management and people-skills. Prior to Hone, Tom was co-founder and Chief Product Officer of gaming unicorn FanDuel, where over a decade he helped create a multi-award winning product and a thriving distributed team. He has had lifelong passions for education, technology, and business and is grateful for the opportunity to combine all three at Hone. Tom lives in San Diego with his wife and two young children.

Tom regularly speaks and writes about leadership development, management training, and the future of work.

Episode transcript

Tom Griffiths: This is Learning Works, a podcast presented by Hone. It's a series of in-depth conversations with L&D experts, HR leaders, and executives on how they've built game-changing learning and development strategies, unleashed business growth through talent development, and scaled their global L&D teams. Tune in for the wisdom and actionable insights from the best in the industry.

I'm Tom Griffiths, CEO of Hone. Welcome to Learning Works.

Hello, everyone. Welcome to Learning Works. Today, our guest is Dr. Will Thalheimer, a world-renowned L&D practitioner, researcher, author, and thought leader. For most of us in the L&D community, he needs no introduction, but a little bit about Will. If you haven't heard of him, Will has a PhD in educational psychology from Columbia.

And in his almost 40 years of experience in L&D, he's made some incredible contributions to our understanding of how people learn and take that learning into performance at work, for which he's been recognized with the coveted Guild Master Award presented by the Learning Guild. He is the creator of LTEM, the Learning-Transfer Evaluation Model, something that's really inspired our work here at Hone, and is the author of a few books, including the award-winning Performance-Focused Learner Surveys, as well as the forthcoming CEO's Guide to Training, Learning, and Work.

Will, thank you so much for joining us today. 

Will Thalheimer: It's my pleasure to be here. 

Tom Griffiths: Really appreciate the time. As you know, we've been fans of your work for many years here at Hone, so I'm sure it'll be a great conversation. I really appreciate how you bridge the research world and bring that into actionable methodologies for us in the corporate learning space.

While many aspects of learning research can often focus on kind of early childhood and, you know, young adult education, you've really specialized in the corporate space. Curious what first drew you into the world of learning and development in a business setting?

Will Thalheimer: Well, it probably goes back to my days as an MBA student, a master's in business administration at Drexel University in Philadelphia.

And I was taking business courses. I thought, you know, a business degree would be good to get; you could use it for a lot of things. But as I was taking those courses, they were good, they were useful, but I wasn't really inspired. And so I started looking around, and I found at Drexel a four-course sequence on instructional design.

And I started taking those courses, and I go, this is it. This is what I want to do. And so as I was getting ready to graduate, I started looking around. Actually, a funny story. I had done my master's project on building a simulation, and so I wanted to, like, use simulations to help learning. And so I was paging through Training Magazine, and there it was on the page, an advertisement for the world leaders in business simulation.

And I said, Oh, I want to work for them. Where are they located? It was like four blocks from my house. So the very next day I put on my one suit, I went down there and I knocked on their door and I said, I want to work for you. And the rest is history. I started as an instructional designer there. Yeah.

That's how I got started.

Tom Griffiths: That's fantastic. Yeah. I mean, simulation is such a compelling way to learn. What were the kinds of simulations you were building in those days?

Will Thalheimer: Well, we, as a company, focused on finance and strategy, but I got involved very quickly in building leadership simulations. We had a leadership guy, Dennis Cohen, and we had an idea: let's build a simulation to teach management, to teach leadership skills.

And so we had to figure this out all on our own. This was a long time ago, back in the mid eighties. But great, I ended up being the project manager on that, and lots of stories there. Some harrowing moments, because we were creating something from scratch. Yeah. But it worked out, and it was really, really lots of fun.

Tom Griffiths: Yeah, no, that's great. And it's awesome that your career launched there. And I know we're going to talk a lot about the research that you've been able to harness for learning design in a corporate environment and much of your work in the measurement space. I'd be curious if you can bring us up to the moment and tell us: what are you working on right now, and what is it that's exciting to you about that?

Will Thalheimer: I'm rebooting Work-Learning Research, which is my consulting practice. Yeah. And you know, that's always fun to restart, to be in startup mode as an entrepreneur, the excitement and the terror at the same time. I'm also working on what I call the LTEM Bootcamp. LTEM is a learning evaluation model, and the bootcamp is a workshop on it.

It's a three-week program starting next week. So I'm very excited about that, very focused on that. The other thing that we're working on right now, which I'm very excited about, is my new book, the CEO's Guide to Training, Learning, and Work: Empowering Learning for a Competitive Advantage. That's going to be coming out in a couple of months.

And there's a bunch of little things to do on that, but I'm very excited that it's finally done after about five years.

Tom Griffiths: That's great. Yeah, no, we're really looking forward to that as well. I find that so many of the conversations that we have, both with customers and with practitioners here in the space, are about how to get leverage with the C-suite and, you know, in particular the CEO.

And I think opening leadership's eyes to the necessities and best practices around learning is a real difference-maker for the space. So I'm excited for that. We, you know, we've planned our conversation to go elsewhere today, but I wondered if there were any kind of top-level headlines or principles you wanted to call out from the new book, or do you want to keep us in eager anticipation?

Will Thalheimer: Well, in essence, the book has really short chapters, and then there's a lot of chapter notes, you know, with the research backing behind it. I wanted to keep it really sleek so people could get through it, read it, really, you know, resonate with it. But the basic premise of the book is: how do we empower ourselves in the learning space?

We do good work, we do important work, but we know that we could be doing more if there weren't some roadblocks. Part of the roadblocks are thrown up because senior management doesn't really understand what we do. They don't understand our leverage points. And some of the roadblocks we build ourselves, frankly.

And so I'm hoping with this book not only to communicate with CEOs and C-suite folks, but also with the learning folks as well.

Tom Griffiths: Yeah, yeah, that's great. The more that we can do to build a shared understanding between those two parts of the organization, the more successful learning initiatives can be, and the more budget they can get, and the more richness we can bring to people's development.

So I think that's a really admirable mission for the book. Looking forward to it. 

Will Thalheimer: Thanks. Well, yeah, appreciate that.

Tom Griffiths: You've also described your mission as bridging the gap between learning and research. And so I'd be curious on your take: how wide is that gap today? And how has it changed over your career?

Will Thalheimer: Yeah, when I started Work-Learning Research the first time back in 1998, the gap was huge. In fact, that's why I got started. I said, Oh, somebody needs to be in there bridging the gap. We've got all these smart learning professionals. We've got all these learning researchers. They're never talking to each other.

Somebody needs to get in there and sort of translate between them. So, over the years, in the beginning it was a little frustrating, I didn't see much progress, but over the last five or ten years, I see a lot of progress. Number one, people are avoiding some learning myths that are out there, like learning styles, etc.

And also, they're doing a lot more with, you know, sort of fundamental learning factors, like retrieval practice and the spacing effect, etc. So I think we're doing pretty well as a field. We're getting more and more of this. Now, not everybody's doing it. One of the things about the learning space is that we have new people arriving every year, and unfortunately we don't always educate them properly or enough, and so there tend to be some gaps. But for the most part, those of us who are in leadership positions are doing more and more with research-based best practices. So I've been very pleased over the last five years.

It's great. 

Tom Griffiths: Yeah, I totally agree, and I think you can take a lot of credit for disseminating a lot of scientific thinking in the space, debunking myths, which we can talk about, and not just kind of criticizing the work that is being done, but actually providing real, actionable methodologies and solutions that are grounded in research in a way that perhaps more historical or intuitive approaches are not. So, you know, I know the field is appreciative of that. When I came to the field six or seven years ago, I saw the same thing, in the sense that it felt like, you know, there was this big gap in skills, certainly in management and leadership, that needed to be filled.

And the good news was we knew what those skills were in terms of, you know, frameworks for doing certain things like coaching or difficult conversations. And we knew, you know, these are the best practices in adult learning from the research. But it did feel like they hadn't quite been synthesized together in a, you know, in a productized way. And so, with my background as a product person, that was part of the contribution we've been trying to make over the years here at Hone: just bringing together those best practices in a productized solution so that people don't have to, you know, look at the research themselves or build their own evaluation tools. You know, it's all built in.

And yeah, I've appreciated your time and your collaboration on all of that. Just curious, what are some of the standout research findings that you feel are still underappreciated or underutilized in the field today?

Will Thalheimer: You know, one of the biggest flaws that is still prevalent, though it's getting better, one of the biggest flaws in our training efforts is that we tend to teach too much. We tend to broadcast, and it's all about, you know, content, content, broadcasting this information, and when you do that, you lose your learners. Number one, they sort of get overwhelmed with it. But also, when we do that, we're not really reinforcing.

We're not giving them practice, we're not giving them support in thinking about how to use it in the real world. We're not giving them the sort of cognitive supports that are going to help them remember. So, you know, one of the things that we can still do more of is support people in remembering.

And the practical thing we can do there is to give them realistic practice. Think about what their real tasks are, what their real decisions are in their work, and give them a lot of practice on those things. And if we can space that over time, that's even better: spacing repetitions of content over time, and not rote repetitions, because those don't work that well, but, you know, interesting examples, different situations.

If you get people to, you know, practice those or think about those spaced over time, that's much more effective. So those are some of the things that are really sort of the gaps, and some of the things that we can do to fix them.
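As a concrete illustration of the spacing idea Will describes above, here is a minimal sketch in Python of an expanding-interval schedule for spaced retrieval practice. The function name, the gap lengths, and the example class are illustrative assumptions of ours, not something prescribed by the research or by Will.

```python
from datetime import date, timedelta

def expanding_schedule(start, gaps_in_days=(2, 7, 21, 60)):
    """Return review dates at expanding intervals after an initial session.

    Each review is meant to be retrieval practice in a realistic scenario,
    not a rote re-read of the content. The gap lengths are illustrative only.
    """
    review_date = start
    dates = []
    for gap in gaps_in_days:
        review_date += timedelta(days=gap)
        dates.append(review_date)
    return dates

if __name__ == "__main__":
    # A learner finishes a (hypothetical) coaching-skills class on March 3rd, 2024;
    # schedule four spaced practice prompts over the following weeks.
    for i, when in enumerate(expanding_schedule(date(2024, 3, 3)), start=1):
        print(f"Practice prompt {i}: {when.isoformat()}")
```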

Tom Griffiths: Yeah, absolutely. I mean, it's so crystal clear from the research that not just spaced repetition, as you say, but spaced usage or spaced recall has this dramatic effect, because the brain is learning what knowledge is useful.

And if you remind it that this particular piece of knowledge or technique or skill is useful, then, you know, it will retain it because it's being used. And doing that over a period of time and space, like you say, has just been proven scientifically, time and time again, to be an improvement. I know one of your fun activities is debunking learning myths.

We see a lot of that online as a service to the field. What are some of the biggest learning myths that have been debunked by research? 

Will Thalheimer: Well, you know, I mean, I am known as one of the debunkers. I started a thing called the Debunker Club a number of years ago, but I'm actually finding I don't need to do any debunking anymore because it's really picked up.

The whole field is doing it now. So if people put out something about learning styles on LinkedIn, there's a whole bunch of people who helpfully say, Hey, you know, you should know that scientists have looked at this and it really doesn't work that well. So learning styles is one of those.

Tom Griffiths: Uh, and that's the "I'm a visual learner," "I'm an auditory learner" idea. Um, but yeah.

Will Thalheimer: Those are the most common ways people think about it, but there are many different frameworks; researchers have actually looked at like 60 different types of learning-styles frameworks and found none of them work that well. But the idea is that if we know somebody's learning style, then if we design our learning to meet that learning style, they'll do better. So if they're a visual learner and we give them things visually, they'll do better. If they're an auditory learner and we give it to them with audio, they'll learn better. That doesn't work.

It doesn't work that well. And instead of doing that, a lot of people, if they sort of embrace the learning-styles notion, when they do their learning designs they spend a lot of time and energy developing exercises for that. They would be better off spending that time providing spacing, practice, retrieval, good feedback, et cetera.

Tom Griffiths: Yeah, makes sense. And so if someone was to say, well, different people learn in different ways, what validity is there to that, even if it's not kind of as delineated as a learning style?

Will Thalheimer: Usually, what we're trying to do is to match the medium with the thing we're learning, right? So if we're learning about music theory, then audio is probably a great thing.

If we're teaching a surgeon how to, you know, cut through skin or something like that, then we want it to be both visual and kinesthetic, right? Those are the most important things. So we really want to match the learning design to the needs of the learning, or the performance. Yes, learners have preferences, but, you know, there's a ton of research, and I actually have a chapter on this in the new book.

There's a ton of research to show that learners don't always have good intuitions about learning. And so if we sort of rely on what they think they need, then we're probably going to be designing learning that's not as effective as it could be.

Tom Griffiths: Mm hmm.

Yep. No, that makes sense. And I guess, you know, there's the whole personalization angle, but often that can be more to do with their state of knowledge or level of proficiency. 

Will Thalheimer: You're absolutely right. You know, we should personalize based on where they are cognitively, what they know, what their motivations are, et cetera.

Um, but that's a different thing than learning styles.

Tom Griffiths: Right, right. Yep. No, that makes total sense. So match the modality to the task, and if possible, match the tuition, or the point at which you're starting the learning experience, to the level of knowledge or proficiency of the learner. Another myth, I think, that you have debunked, if my memory serves me correctly, is the 70-20-10 rule, which often gets thrown at me when people say, well, if you provide formal training, that's actually only 10 percent of what people learn.

And so I can get 90 percent without doing any of that. And so we'd love to hear your latest thinking on 70-20-10.

Will Thalheimer: Well, the folks that gave us 70-20-10, not the original researchers, but those who have been sort of out there advocating for it, even they have stepped back from it. But the message they were trying to send, and I've had a lot of conversations with Jos Arets on this.

He's a really good guy from the Netherlands, and he was one of the ones talking about 70-20-10. But what they were advocating for was: hey, formal training isn't the only way that people learn. Which is obvious, right? And so their argument is that we should try to leverage, as much as we can, other ways of learning.

Particularly learning in the workflow, right? Now, oftentimes we sort of oversimplify when we try to do that. So we say, Oh, learning in the workflow, we're going to have a brown bag lunch, or we're going to have communities of practice. And you know, some of that's good, but that's not really as much as we could be doing with learning in the workflow.

So for example, if we want people to be more creative, well, we need to think about how to support them in being creative. People tend to think that they should brainstorm as a group, and the research shows that you should brainstorm individually, and that's better. If you want to, you know, get more ideas from your team as a leader, you might, at the end of every project, at the end of every meeting, say, Hey:

What did we do well? What did we not do well? And what do we want to change for next time? And that creates a way of learning for a group. It also sort of opens it up and makes it not about hierarchy and status, but it makes it about, Oh, what are we trying to do? What are we trying to learn? Everybody here can have good ideas.

And by doing these kinds of things, this is where we really learn powerfully in the flow of work. So 70-20-10 was based on really limited research, and the numbers, you know, they found that the numbers change, etc., depending on what you're looking at. But the idea that we should try to go beyond formal training is a good one.

Not that we should get rid of formal training, but that we should be able to augment it with other ways of learning as well.

Tom Griffiths: Right, right. Which is a qualitative version of the point; that makes total sense. I've just seen it used sometimes to say, well, we don't actually need formal training because so much learning can happen on the job or as part of, you know, a mentor-mentee relationship.

What would you say to that idea of kind of eliminating formal training? 

Will Thalheimer: It's a dumb idea. I mean, formal training is done for a bunch of reasons. Number one, sometimes if you're going to do this workflow learning, you need to be trained on how to do it, right? You know, some of it's intricate or complicated. Or you might need to be motivated about it. So that's one thing.

You know, the other thing is, sometimes to learn we really do need to be away from distractions, right? We need to have time away. We need to have time with other people to think about things. We need to have experts there that we can ask questions of. There are a lot of really good reasons why. I mean, just look at the success of MOOCs, right?

You know, courses you take online with a whole bunch of other people in a self-study format. Well, not a lot of people finish those, and, you know, their motivation lags, they're distracted, they're not bonding with others. They don't feel responsibility to, you know, an instructor or to other learners.

So, you know, clearly that doesn't work that well. So people don't always learn that well on their own. 

Tom Griffiths: Yeah, exactly. You know, you can obviously see examples. If I'm coding and I encounter a new library or something that I need to learn about in that moment, I can do that as I'm going along. But there are other, you know, computer science concepts that I need foundationally to be able to be an effective engineer.

And it's good to learn those in a formal training setting. Sometimes I think of pilots, where, you know, you wouldn't want the pilot flying a plane to have to look up the blinking red light in, you know, a 200-page manual on the fly to figure out what's happening. Like, you know, there's been some formal training there, thankfully, as well as ongoing development.

So you need both. Yeah.

Will Thalheimer: That makes sense. Yeah. And you know, that's a crazy idea: oh, you don't have to learn anything, you can just look it up when you need it. You know, the sort of Google idea or YouTube idea. Well, that's nuts, right? I mean, a salesperson on a sales call, right? They can't just stop in the middle of it and say, Wait a minute.

I need to go check my, I need to go watch a YouTube video on how to deal with this objection. Yeah. You know, a lot of us work in groups, right? So there's no time to sort of stop and do things. And oftentimes we have to be fluent in our responses. So, you know, that idea that you can just look things up is not a good one.

And, you know, half the Internet is filled with bad information. You have to have at least some knowledge of the area so that you know whether what you're looking at is good or not. And the other thing is, people learn better when they know something already. Yeah. I'll give you an example from my consulting practice.

If I'm teaching about learning sciences, how to build better learning interventions, or I'm teaching about learning evaluation, I used to just go out and do a learning audit, and I would dump the recommendations on people, or do a learning evaluation study and just tell them what I found. But I learned over the years the hard way that that doesn't work that well.

If I have a team together and I'm talking about something, learning sciences or learning evaluation, and I share a recommendation, you know, half the people resonate with it and the other people are like, why are you recommending that? That's a crazy idea. Then you get into this big discussion.

We in the learning field come from so many different backgrounds and perspectives. We're not always on the same page. So I always require, like, an evaluation background if I'm talking about evaluation, or a learning science background before we talk about learning, because you have to have that fundamental knowledge before you can think about things clearly, before you can make decisions.

Tom Griffiths: Yeah, so in a way you qualify people to be in the conversation, in some sense. Makes sense. And so, to talk about modalities a little bit, and I guess this plays in somewhat to the learning-in-the-flow-of-work versus formal training question, I'm curious: what does the research say about the big debate between live instructor-led training and on-demand e-learning?

Will Thalheimer: So I looked into this a number of years ago, because it's a question that comes up over and over. And what I found by looking at the research is that if you use the same learning method in a different modality, then there's not going to be a difference. So in other words, if you use an animation online and an animation in the classroom, there's going to be no difference in the amount of learning.

If you show a video online and a video in the classroom, there's going to be no difference. And this was found by Richard E. Clark back in 1983; he did, you know, the famous studies on this, saying that it's not the modality that makes a difference, it's the learning method that you're using.

Now, that being said, sometimes the technology allows you to do different things, right? So if I'm online and I can get people to go through a simulation, that's going to be better than if I'm in a classroom where people don't have access to the IT system I'm trying to use. So yeah, there's certainly some value in different things, but here's the other thing.

If you just say, build the best e-learning course you can, build the best classroom training course you can, what the research shows is that e-learning actually tends to outperform classroom training. And the reason for that is that those who are designing e-learning know that they can't just broadcast information.

They have to have some interaction. They have to give some practice. They have to make it interesting. Both can be better designed, both e-learning and classroom training. But if you just leave it out in the wild there, then e-learning tends to outperform classroom training. The other thing I think we should be cognizant of is that we're in an amazing age of innovation in e-learning.

Right. And when I say e-learning, I'm talking about all online learning. I was talking to somebody from Australia and I was using the word e-learning, and they said, Will, e-learning here in Australia just means page-turners. Oh no, that's not what I mean. But, you know, we had COVID and nobody thought we could actually teach online.

And then we all taught online and it worked out pretty well, except for kids in school. And there are new systems all the time. There are new learning programs. We're getting better and better at it. And I think we'll continue to do that.

Tom Griffiths: Yeah, I agree. And it is a bit of a fuzzy definition. So, as I was asking the question, e-learning in my mind was, you know, the pre-recorded videos, maybe with some quizzes, but you were thinking anything that's online, so that could include live Zoom call interactions within that electronic or online learning modality.

Will Thalheimer: Absolutely. Yeah. Absolutely. 

Tom Griffiths: Yeah. And that's obviously what we anchor on at Hone. It feels like one of the advantages that is kind of lesser known or talked about is that if you have that online live experience, you can get the benefits of the interaction and the practice in the moment, as we talked about earlier being important. But then, because it's convenient versus, like, flying everybody to a hotel and doing an annual couple of days of training offsite, with the online modality you can do it every week. You can do it for an hour every week, and you get that benefit of spaced practice, spaced repetition.

And so we've always tried to capture that benefit through, you know, reinforcement, class after class, week after week, as well as in the class itself through interaction. So I agree. I think you can tease out the advantages and actually do better than the physical classroom.

Will Thalheimer: You know, I don't know what traffic is like where you are.

When I'm in the Boston area, it's crazy. And nobody wants to be in a traffic jam for an hour to get to your training program and an hour to get back home. And there's also, you know, a lot of people are thinking about climate change, and flying people to a training is really not the best thing for the planet.

Right. So yeah. 

Tom Griffiths: Agreed. And being off the floor, as they say, for a couple of days, the time is also disruptive. So yeah.

Will Thalheimer: I don't want to say that I would do everything online all the time. I mean, there are reasons to be in person. Like, if you're teaching somebody how to fix some kind of plumbing system or something, you know, they need hands-on experience, and if they have the equipment at home or wherever they are to use it, that's great.

But sometimes they need to be there. And there are also some benefits of in-person stuff for other reasons: for bonding, for, you know, talking about emotional things. So I don't think we want to do one and not the other, but we really ought to think about what we need and what tool is best for it.

Tom Griffiths: Yeah, agreed. And we recommend to customers, you can do both. You don't have to pick. Say there's an annual offsite, even in a remote team or a team with multiple offices. There can be some learning experience as part of that multi-day agenda for the whole company, and that can sometimes be a great kickoff to a learning experience that is then followed up online, where the real skill building and reinforcement happens over time.

So you really can get the best of both worlds: the social aspect in person, and then, you know, the reinforcement online.

Will Thalheimer: I was just going to say, let me get meta for a second. The things we've talked about already, and I know we're going to talk about more things, the things we talked about already show that there are a lot of subtleties and complexities to the job of being a learning designer, right?

Yep. And, you know, a lot of times our organizational stakeholders don't understand that there's this body of knowledge out there. There's a lot of wisdom you have to have in building learning. And they tend to push us: you need more content, you need more content. But, you know, if we can sort of be aware of the science, be aware of the complications and the complexities, and make good choices, then everybody's going to be better off.

Tom Griffiths: Mm hmm. Yeah, right on. Totally agree with that. And so I'd be curious, do you have specific advice for those in the L&D field who are looking to implement more research-based practices or bring more learning science into their toolbox as they go about designing their learning experiences?

Will Thalheimer: Oh, absolutely.

So one, just be motivated to do it. You know, decide that you want to do this. It's a journey. You're not going to learn everything in a day, right? And number two, gather a team around you that wants to do it as well. You know, I've talked to a lot of people in our field over the years, and those who want to do best practices sometimes feel like they're the only one, they're a lone wolf, and that gets stressful, leads to burnout, etc.

So gather people around you. If you can't find them in your own company, find them in, you know, places like the Learning Development Accelerator, which is a membership organization of people interested in L&D, and particularly research-based best practices, or the Learning Guild, et cetera. You know, there are some local places as well, but gather people so that you're not doing this by yourself.

And then third, I would recommend that people read the research translators. You don't need to read the research on your own. In fact, I kind of think it's a bad idea if you don't know what you're doing. It's like anything else: you need experience and expertise when you're reading a research study. You need to know things like, well, I can't just read one research study.

I've got to read at least 12 to 20 around this one topic. I need to sort of understand the field of cognitive psychology. I need to have a lot of background. I need to know about statistics and research design. So it takes some expertise to read a real research study, but there are people out there who do this.

They love it. They're experienced at it. So the research translators: people like Julie Dirksen, Clark Quinn, Mirjam Neelen, Karl Kapp, Jane Bozarth. I know I'm going to forget some. Patti Shank. These folks do this time after time. They're good at it. Of course, the legendary Ruth Clark, Paul Kirschner, a whole bunch of people.

Those are the people to read and learn from. Yeah, the book Make It Stick. There's a whole bunch of resources now. In fact, we're in sort of a golden age of nonfiction. And so there are a lot of books out there that really break this stuff down to make it resonate for us.

Tom Griffiths: Yeah. I think that's such a great point, to read the translators. Because, like you say, any isolated research study could have its own flaws. You could jump to premature conclusions. There could be follow-on studies that debunk certain findings, or, you know, there could be a certain limited context in which they're valid. So I think that's such a great point.

And the names you mentioned, many of whom we know, and some of whom we've had on the podcast, are a great starting point, so I really encourage folks to check those out, and we can provide some links in the notes to get people started. So thanks for calling that out. I've got one last question for this first part of our conversation before we jump to part two, and it's the question that you kind of have to ask in 2023-2024: about AI.

I'm sure you've seen a bunch of different hype cycles over the years, so I'd love to get your take on AI as the next great thing as it applies to learning. 

Will Thalheimer: Yeah, I knew you were going to ask this. Once you started the question, I knew it was going to be about AI. It's been like one year and one month since November 30th, 2022, when ChatGPT 3.5 was announced to the world, given away to the world. And so lots has happened since that time, and it's getting better. In the beginning, I mostly kept my mouth shut because I really didn't know enough. My only comment was, Hey, I've been around a while. Every time there's a new learning technology, the first thing we do is screw it up.

We make a lot of mistakes with it. We create some dangerous things. And so, be very careful. That's how I got started on that. But then, you know, people that I trust in the field said, Hey, Will, I've actually used it and we're getting good results. And so I began to take it seriously and started studying it on my own.

I took a prompt engineering course, which you've got to do to get started. I've gone to two conferences now specifically to learn about AI. I've been reading books, listening to podcasts. And one of the books that I read, which I would highly recommend, is The Coming Wave by Mustafa Suleyman. He is the co-founder of DeepMind,

the AI company that Google bought, and he's also started another AI company, which I think is called Inflection or something like that, I'm not sure. But anyway, he wrote this book, The Coming Wave, and it's not just about AI; it's also about synthetic biology as well. And what he says, and he was very convincing, this book is not just about these technologies.

It's really sort of a musing on the history of technology. I thought it was just brilliant, a brilliant read. And what he says, and he's convinced me, is that anytime there's a new general-purpose technology, general-purpose meaning like the wheel, or fire, or, you know, the internet, the car, iPhones, smartphones, anytime there's a general-purpose technology, you're not going to stop it.

It's going to win. People, you know, find value in it. They're going to use it. Organizations are going to be built around creating these technologies, supporting them, getting them out to the marketplace. So you can't stop these general-purpose technologies. Most of them have, you know, good parts and bad parts.

Fire is great for cooking your hamburger, but not so great if it burns down your house, right? So what we need to do with AI, he says, is put some guardrails on it. And his argument is that the only entities that can really do that are national governments. We don't need to go down that path.

But I'm convinced that AI is going to make a difference. I've been using it myself with some benefits. There are obvious problems with it. There are hallucinations, meaning that it spouts out wrong information. Somebody showed me a query where they'd asked ChatGPT about, you know, LTEM, who created LTEM, and it said it was some association.

It wasn't me. It didn't mention me at all, and I created it. And not only that, but the model that it said was LTEM was like a five-level model. It wasn't the model at all. So it clearly makes mistakes. It's getting better. It's not going to be perfect. I do think that it's an opportunity for us in the learning and development field.

Number one, we can use it for our own purposes and we can get some benefits. Number two, though, our organizations are not well outfitted. You know, most people don't know about this technology. They don't know how to use it properly. They don't know how to get the best out of it. They don't know how to make it not problematic.

One of the conferences that I went to had business leaders talking about AI, and their biggest fear was using AI with customers, like making recommendations, financial recommendations or customer service recommendations. And they were definitely afraid that it would tell them to do something that was not, you know, in the customer's best interest. They were really afraid.

So clearly, as a whole, people in the working world need to learn about this technology. Well, who's better at helping people learn than us? So we have a real opportunity here to do that. There's a danger, though. You know, there's a bunch of people running around saying they're AI experts in the L&D field when they've taken a couple of courses or read a book or something. They're not experts. We really need to bring in experts and be careful about that. But I think it's a real opportunity for us. I'm not worried about, well, I know you're going to ask this follow-up question: what about our jobs? Will, do you think it will take our jobs away?

I don't think it will, at least not in enlightened organizations, because it's pretty clear that to use AI usefully, you really have to have a human in the equation who knows what they're doing. So I think if we learn how to use the tool well and we can put those guardrails on it, it can be a benefit to us as well.

Tom Griffiths: Yeah, I totally agree with that take, and I appreciate how you started by going on your own learning journey with this new general-purpose technology. I think we've come to similar conclusions at Hone as we think about how we can harness, you know, recent breakthroughs for the benefit of the company or the customers, and we put it into three buckets.

One was, you know, it can make us more effective and efficient ourselves internally. And, you know, our CTO was the first to say, well, actually, let's put some guardrails around it from a technical perspective so that we're staying on the right side of all sorts of data confidentiality agreements and best practices.

But once we've done that, I really do think it's over to the team on the front lines to find different uses for it; to your point, you know, learn how to engineer a prompt and find some of the surprising use cases that can help you in your day-to-day job, rather than us kind of centrally dictating how we use it.

And I know that some of the teams on our content side have been using it for research and generating ideas, and it can cut down the time it takes to recap the important papers or books in a particular area. And so that's been really helpful. Second was, how can we use it in the product so that customers can experience, you know, an up-leveled learning experience?

And so, you know, we're brainstorming around ways that a chat interface could be a way to kind of practice some of the things that you learned in a live online class. How does an AI counterpart in a chat experience work? Can it nudge you, or can it, you know, add to the learning experience in other ways?

So I'd be curious if you've brainstormed any ways that it can be part of a learning experience as well as part of work? 

Will Thalheimer: Yeah. Well, I've seen people playing with this. I've been in correspondence with someone who's using ChatGPT with learning design, and she's really trying to teach herself, like, you know, programming, how to be a developer.

And she tried it out at first, and she's a learning person, and she realized that if she asked ChatGPT to design the learning, it did a terrible job. So she's been using prompt engineering and playing with it, you know, just really trying all these different things out to make it do a better job. So she's got it to the point where it uses the Socratic method more, so that it, you know, asks the learner more about what their specific learning needs are, etc.

But there's more for it to do. Oh, and by the way, I went on to ChatGPT this week, I think it was ChatGPT, I've got a bunch of them that I use, and it actually hinted, as it was doing its work, it said, I'm looking up the answers on the Internet. So it's now nudging us to think about how it actually works, which I thought was really good.

Clearly, we're going to have to put some constraints on this, because the Internet is full of a lot of good information and a lot of bad information. If we try to just let it do our thinking for us, we're going to get into some hot water. I do think there's a possibility of developing a set of prompts that put the guardrails on for specific areas.

You know, these, what are they called, GPTs? Where you can create your own app, basically. I think those have some real opportunities. I've been thinking about, well, how can I create one based on all the stuff that I've done over the years? One thing I want to experiment with, I just haven't had the time lately,

is using it to, you know, I'm an evaluation guy, so one of the things you do is surveys; you survey the learners. And we have choice questions, where we have multiple-choice questions, and those are easy to process, but we also have open-ended questions, and those are harder to process. But I think, you know, ChatGPT, with the right prompt engineering, could be designed to sort of categorize or analyze those responses in a way that would be quick and easy, a lot easier than me going through every answer and picking out the representative sample.

So I think that's a big opportunity. Yeah. 
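To make that survey idea concrete, here is a minimal sketch, assuming the official OpenAI Python client, of prompting a chat model to sort open-ended learner comments into a fixed set of themes. The model name, theme list, and prompt wording are illustrative assumptions rather than anything from Will's or Hone's actual practice, and, as Will cautions, the output would still need a human review.

```python
import json
from openai import OpenAI  # assumes the official OpenAI Python client is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical theme list for a learner survey; adjust to your own programs.
THEMES = ["relevance to my job", "facilitation quality", "pacing",
          "practice opportunities", "other"]

def categorize_comments(comments, model="gpt-4o-mini"):
    """Ask a chat model to tag each open-ended survey comment with one theme.

    The model name, theme list, and prompt wording are illustrative; the
    output should still be spot-checked by a human before it is reported.
    """
    prompt = (
        "Categorize each learner survey comment into exactly one of these themes: "
        f"{THEMES}. Respond with JSON: an object whose 'results' key holds a list "
        "of objects with 'comment' and 'theme' keys.\n\n"
        + "\n".join(f"- {c}" for c in comments)
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)["results"]

if __name__ == "__main__":
    sample = [
        "The breakout practice felt rushed.",
        "Loved the role-play; I used it with my team the next day.",
    ]
    for row in categorize_comments(sample):
        print(f'{row["theme"]} -> {row["comment"]}')
```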

Tom Griffiths: Yeah. I totally agree with that. That's something we're playing with as well. It's from all of the tens of thousands of responses we get that are open ended. How can you do a couple of things? You can summarize them for, you know, the, your partner on the customer side to say, here are some of the themes that are coming up in learning the, it's good.

to be aware of, and you can also use some of those open ended Questions at a learner level to learn about what they're struggling with and feed that into the algorithm for recommending what can come next. And so you can bring a lot more intelligence to the learning needs beyond just multiple choices we've used in the past.

So, yeah, I agree with you there. And then, you know, from our side, the third part was just thinking about how we can help to educate the space about AI. And so, yeah, I thought it was a great point you made about just taking the time to do that, for each of us as learning professionals. As our companies come to learn a new technology, of course, we're right there to help them with it.

So that was a great thought.

Will Thalheimer: I think, you know, the three things you're talking about seem like good things to do. And I think it's about being good role models, and we all need to learn about this. Because, yeah, you can get around in the world without a smartphone, but eventually most of us learned how to use a smartphone.

AI is just like that, right? Same with the Internet and email. Remember, we didn't want to do all that. I didn't want to carry one around. I didn't want to use PowerPoint; I wanted to use my old slides, you know, but eventually you have to do it. So, those of you in L&D, if you're listening to this, you know, get going.

You've got to start learning about AI. The first thing to do is take a prompt engineering course. You can even find ones, you know, free online on Coursera, et cetera. That's great.

Tom Griffiths: Yeah. Just watch out for its definition of LTEM. We're going to set the record straight. Great. Well, thanks, Will. Really appreciate this first part of the conversation.

Looking forward to part two. Thanks for listening to Learning Works. If you've enjoyed today's conversation, we encourage you to subscribe to the podcast for our exciting lineup of future episodes. Learning Works is presented by Hone. Hone helps busy L&D leaders easily scale power skills training through tech-powered live cohort learning experiences that drive real ROI and lasting behavior change.

If you want even more resources, you can head to our website, honehq.com. That's H-O-N-E-H-Q dot com, for upcoming workshops, articles, and to learn more about Hone.

Tom Griffiths: This is Learning Works, a podcast presented by Hone. It's a series of in depth conversations with L& D experts, HR leaders, and executives on how they've built game changing learning and development strategies, unleashed business growth through talent development, and scaled their global L& D teams. Tune in for the wisdom and actionable insights, the best in the industry.

I'm Tom Griffiths, CEO of Hone. Welcome to Learning Works.

Hello, everyone. Welcome to Learning Works. Today, our guest is Dr. Will Thalheimer, a world renowned L& D practitioner, researcher, author, and thought leader. For most of us in the L& D community, he needs no introduction, but a little bit about Will. If you haven't heard of him, Will has a PhD in educational psychology from Columbia.

And in his almost 40 years of experience in L& D, he's made some incredible contributions to our understanding of how people learn and take that learning for performance at work, for which he's been recognized with the coveted Guild Master Award presented by the Learning Guild. He is the creator of the LTEM, Learning Transfer Evaluation Model, something that's really inspired our work here at Hone, and is the author of a few books, including the award winning, Performance Focused Learner Surveys, as well as the forthcoming CEO's Guide to Training, Learning, and Work.

Will, thank you so much for joining us today. 

Will Thalheimer: It's my pleasure to be here. 

Tom Griffiths: Really appreciate the time. As you know, we've been a fan of your work for many years here at Hone. So I'm sure it'll be a great conversation. I really appreciate how you really bridge the research world and bring that into actionable methodologies for us in the corporate learning space.

With many. Aspects of learning research can often focus on kind of early childhood and, you know, young adult education, but you've really specialized in the corporate space. Curious what first drew you into the world of learning and development in a business setting? 

Will Thalheimer: Well, it probably goes back to my days as a MBA student, master's in business administration at Drexel University in Philadelphia.

And I was taking business courses. I thought, you know, business degree would be good to get, you know, you could use it for a lot of things. But as I was taking those courses, they were good. They were useful, but I wasn't really inspired and So I start looking around, and I found at Drexel a four course sequence on instructional design.

And I started taking those courses, and I go, this is it. This is what I want to do. And so as I was getting ready to graduate, I started looking around. Actually, a funny story. I had done my master's project on building a simulation, and so I wanted to, like, use simulations to help learning. And so I was paging through Training Magazine, and there it was on the page, an advertisement for the world leaders in business simulation.

And I said, Oh, I want to work for them. Where are they located? It was like four blocks from my house. So the very next day I got on my one suit, I went down there and I knocked on their door and I said, I want to work for you. And the rest is history. I started as an instructional designer there. Yeah.

That's how I got started. That's 

Tom Griffiths: fantastic. Yeah. I mean, simulation is such a compelling way to learn. What were the kind of simulations you were building in those days? 

Will Thalheimer: Well, we, uh, as a company, we focused on a finance and strategy, but I got involved very quickly on building leadership simulations. We had a leadership guy, Dennis Cohen, and we had an idea that let's build a simulation to teach management, to teach leadership skills.

And so we had to figure this out all on our own. This was a long time ago, back in the mid eighties. But great, I ended up being the project manager on that, and lots of stories there. Some harrowing moments, because we were creating something from scratch. Yeah. But it worked out, and it was really, really lots of fun.

Tom Griffiths: Yeah, no, that's great. And it's awesome that the career launched there. And I know we're going to talk a lot about the research that you've been able to harness for learning design in a corporate environment and much of your work in the measurement space. Be curious if you can bring us up to the moment and tell us what are you working on right now and what is it that's exciting to you about that?

Will Thalheimer: I'm rebooting a work learning research, which is my consulting practice. Yeah. And you know, that's always fun to restart, to be in startup mode as an entrepreneur, the excitement and the terror at the same time. Also working on what I call the L TEM bootcamp. L TEM is a learning evaluation model. And a bootcamp bootcamp is a workshop on it.

It's a three week program starting next week. So I'm very excited about that. Very focused on that. the other thing that we're working on right now, which I'm very excited about is my new book, the CEO's guide to training e learning and work, empowering learning for a competitive advantage, that's going to be coming out in a couple of months.

And there's a bunch of little things to do on that, but very excited that it's. Finally done after about five years. 

Tom Griffiths: That's great. Yeah, no, we're really looking forward to that as well. I find that so many of the conversations that we have, both with customers and with practitioners here in the space are about how to get leverage with the C suite and, you know, in particular CEO.

And I think opening. Leadership eyes to the necessities and best practices around learning is a real difference maker for the space. So excited for that. We, you know, we've planned our conversation to, to go elsewhere today, but wondered if there was any kind of top level headlines or principles you wanted to call out from, from the new book, or would we, you want to keep us in eager anticipation?

Well. 

Will Thalheimer: In essence, the book is, it's got really short chapters, and then there's a lot of chapter notes, you know, with the research backing behind it. I wanted to keep it really sleek and so people could get through it, read it, really, you know, resonate with it. But the basic premise of the book is to, how do we empower ourselves in the learning space?

We do good work, we do important work, but we know that we could be doing more if there weren't some roadblocks. Um, part of the roadblocks are thrown up because, uh, senior management doesn't really understand what we do. It doesn't understand our leverage points and some of the roadblocks we, we build ourselves, frankly.

And so I'm hoping with this book to not only to communicate with, uh, CEOs and C suite folks, but also with the learning folks as well. 

Tom Griffiths: Yeah, yeah, that's great. The more that we can do to build a shared understanding between those two parts of the organization, the more successful learning initiatives can be, and the more budget they can get, and the more richness we can bring to people's development.

So I think that's a really admirable mission for the book. Looking forward to it. 

Will Thalheimer: Thanks. Well, yeah, appreciate that. You've 

Tom Griffiths: also described your mission is bridging the gap between learning and research. And so be curious on your take. How wide is that gap today? And how has it changed over your career?

Will Thalheimer: Yeah, when I started work learning research the first time back in 1998, the gap was huge. In fact, that's why I got started. I said, Oh, somebody needs to be in there bridging the gap. We've got all these smart learning professionals. We've got all these learning researchers. They're never talking to each other.

Somebody needs to get in there and sort of translate through that. So, uh, over the years, in the beginning, it was a little frustrating, didn't see much progress, but over the last 5, 5 or 10 years, I see a lot of progress. People are Number one, they're avoiding some learning myths that are out there, like learning styles, etc.

And also, they're doing a lot more with things, you know, sort of, sort of fundamental learning factors, like retrieval practice and the spacing effect, etc. So, I think we're doing pretty well as a field. We're getting more and more. Now, it's not, not everybody's doing this. One of the, one of the things about the learning space is that we have new People arriving every year and unfortunately there's not always we don't educate them properly or enough And so there tends to be some gaps But for the most part those of us who are in leadership positions are doing more and more with a research based best practices So I've been very pleased over the last five years.

It's great. 

Tom Griffiths: Yeah, I totally agree, and I think you can take a lot of credit for disseminating scientific thinking in the space, debunking myths, which we can talk about, and not just criticizing the work that is being done but actually providing real, actionable methodologies and solutions that are grounded in research in a way that perhaps more historical or intuitive approaches are not. So I know the field is appreciative of that. When I came to the field six or seven years ago, I saw the same thing, in the sense that it felt like there was this big gap in skills, certainly in management and leadership, that needed to be filled.

And the good news was that we knew what those skills were, in terms of frameworks for doing certain things like coaching or difficult conversations, and we knew the best practices in adult learning from the research. But it did feel like they hadn't quite been synthesized together in a productized way. With my background as a product person, that's part of the contribution we've been trying to make over the years here at Hone: bringing together those best practices in a productized solution so that people don't have to go to the research themselves or build their own evaluation tools. It's all built in.

And I've appreciated your time and your collaboration on all of that. Just curious, what are some of the standout research findings that you feel are still underappreciated or underutilized in the field today?

Will Thalheimer: You know, one of the biggest flaws is still prevalent, though it's getting better.

One of the biggest flaws in our training efforts is that we tend to teach too much. We tend to broadcast; it's all about content, content, content, broadcasting this information, and when you do that, you lose your learners. Number one, they get overwhelmed with it. And when we do that, we're not really reinforcing.

We're not giving them practice, we're not giving them support in thinking about how to use it in the real world, and we're not giving them the cognitive supports that are going to help them remember. So one of the things we can still do more of is support people in remembering.

And the practical thing we can do there is give them realistic practice. Think about what their real tasks are, what their real decisions are in their work, and give them a lot of practice on those things. And if we can space that over time, that's even better: spacing repetitions of content over time, and not rote repetitions, because those don't work that well, but interesting examples and different situations.

If you get people to practice those, or think about those, spaced over time, that's much more effective. So those are some of the gaps, and some of the things we can do to fix them.
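
A minimal sketch of what that spacing can look like in scheduling terms; the expanding intervals and the small Python helper below are illustrative assumptions, not values prescribed by the research or by Will.

from datetime import date, timedelta

def spaced_review_dates(first_session, intervals_days=(2, 7, 21, 60)):
    # Expanding intervals are one common way to space retrieval practice;
    # the specific day counts here are placeholders, not research-mandated values.
    return [first_session + timedelta(days=d) for d in intervals_days]

# Example: after a class on giving feedback, schedule scenario-based practice
# prompts that vary the situation rather than repeating the same item by rote.
for review_day in spaced_review_dates(date(2024, 1, 8)):
    print(f"Send a realistic practice scenario on {review_day}")

The point of the sketch is simply that the repetitions are spread out, and each one asks the learner to retrieve and apply the idea in a fresh situation.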

Tom Griffiths: Yeah, absolutely. I mean, it's so crystal clear from the research that not just spaced repetition, as you say, but spaced usage or spaced recall has this dramatic effect, because the brain is learning what knowledge is useful.

And if you remind it that this particular piece of knowledge or technique or skill is useful, then it will retain it because it's being used, and doing that over a period of time, like you say, has been proven scientifically time and time again to be an improvement. I know one of your fun activities is debunking learning myths.

We see a lot of that online as a service to the field. What are some of the biggest learning myths that have been debunked by research? 

Will Thalheimer: Well, I am known as one of the debunkers. I started a thing called the Debunker Club a number of years ago, but I'm actually finding I don't need to do much debunking anymore, because it's really picked up.

The whole field is doing it now. So if somebody puts out something about learning styles on LinkedIn, there's a whole bunch of people who helpfully say, hey, you should know that scientists have looked at this and it really doesn't work that well. So learning styles is one of those.

Tom Griffiths: And that's the idea of being a visual learner, or an auditory learner.

Will Thalheimer: Yes, that's the most common way people think about it, but there are many varieties; researchers have actually looked at something like 60 different learning-styles frameworks and found that none of them work that well. The idea is that if we know somebody's learning style, and we design our learning to meet that learning style, they'll do better. So if they're a visual learner and we give them things visually, they'll do better; if they're an auditory learner and we give it to them with audio, they'll learn better. That doesn't work.

It doesn't work that well. And a lot of people who embrace the learning-styles notion spend a lot of time and energy developing exercises for it when they do their learning designs. They would be better off spending that time providing spaced practice, retrieval, good feedback, et cetera.

Tom Griffiths: Yeah, makes sense. And so if someone were to say, well, different people learn in different ways, what validity is there to that, even if it's not as delineated as a learning style?

Will Thalheimer: Usually, what we're trying to do is match the medium with the thing we're learning. So if we're learning about music theory, then audio is probably a great choice.

If we're teaching a surgeon how to cut through skin or something like that, then we want it to be both visual and kinesthetic, right? Those are the most important things. So we really want to match the needs of the learning, or the performance, with the learning design. Yes, learners have preferences, but there's a ton of research on this, and I actually have a chapter on it in the new book.

There's a ton of research showing that learners don't always have good intuitions about learning. So if we rely on what they think they need, we're probably going to be designing learning that's not as effective as it could be.

Tom Griffiths: Yep, that makes sense. And I guess there's the whole personalization angle, but often that can be more to do with their state of knowledge or level of proficiency.

Will Thalheimer: You're absolutely right. We should personalize based on where they are cognitively, what they know, what their motivations are, et cetera.

But that's a different thing than learning styles.

Tom Griffiths: Right, right. That makes total sense. So match the modality to the task and, if possible, match the tuition, or the point at which you're starting the learning experience, to the level of knowledge or proficiency of the learner. Another myth that you have debunked, if my memory serves me correctly, is the 70-20-10 rule, which often gets thrown at me when people say, well, if you provide formal training, that's actually only 10 percent of what people learn.

And so I can get 90 percent without doing any of that. We'd love to hear your latest thinking on 70-20-10.

Will Thalheimer: Well, the folks that gave us 70-20-10, not the original researchers, but those who have been out there advocating for it, even they have stepped back from it. But there is a message they were trying to send, and I've had a lot of conversations with Jos Arets about this.

He's a really good guy from the Netherlands, and he was one of the ones talking about 70-20-10. What they were advocating for was, hey, formal training isn't the only way that people learn. Which is obvious, right? And so their argument is that we should try to leverage, as much as we can, other ways of learning.

Particularly learning in the workflow. Now, oftentimes we oversimplify when we try to do that. We say, oh, learning in the workflow: we're going to have a brown-bag lunch, or we're going to have communities of practice. Some of that's good, but it's not really as much as we could be doing with learning in the workflow.

So for example, if we want people to be more creative, we need to think about how to support them in being creative. People tend to think they should brainstorm as a group, and the research shows that brainstorming individually is better. If you want to get more ideas from your team as a leader, you might, at the end of every project or every meeting, say, hey:

What did we do well? What did we not do well? And what do we want to change for next time? That creates a way of learning for a group. It also opens things up and makes it not about hierarchy and status, but about: what are we trying to do? What are we trying to learn? Everybody here can have good ideas.

By doing these kinds of things, this is where we really learn powerfully in the flow of work. So 70-20-10 was based on really limited research, and they found that the numbers change depending on what you're looking at. But the idea that we should try to go beyond formal training is a good one.

Not that we should get rid of formal training, but that we should augment it with other ways of learning as well.

Tom Griffiths: Right, right. That qualitative version of the point makes total sense. I've just seen it used sometimes to say, well, we don't actually need formal training because so much learning can happen on the job or as part of a mentor-mentee relationship.

What would you say to that idea of kind of eliminating formal training? 

Will Thalheimer: It's a dumb idea. We do formal training for a bunch of reasons. Number one, sometimes if you're going to do this workflow learning, you need to be trained on how to do it, right? Some of it's intricate or complicated, or you might need to be motivated about it. So that's one thing.

The other thing is that sometimes, to learn, we really do need to be away from distractions. We need to have time away, we need time with other people to think about things, and we need to have experts there that we can ask questions of. There are a lot of really good reasons why. I mean, just look at the success of MOOCs, right?

Courses you take online with a whole bunch of other people in a self-study format. Well, not a lot of people finish those: their motivation lags, they're distracted, they're not bonding with others, and they don't feel responsibility to an instructor or to other learners.

So clearly that doesn't work that well. People don't always learn that well on their own.

Tom Griffiths: Yeah, exactly. You can obviously see examples. If I'm coding and I encounter a new library or something that I need to learn about in that moment, I can do that as I'm going along. But there are other computer science concepts that I need foundationally to be an effective engineer.

And it's good to learn those in a formal training setting. Sometimes I think of pilots: you wouldn't want the pilot flying a plane to have to look up the blinking red light in a 200-page manual on the fly to figure out what's happening. There's been some formal training there, thankfully, as well as ongoing development.

So you need both.

Will Thalheimer: That makes sense. Yeah. And you know, it's a crazy idea that you don't have to learn anything, that you can just look it up when you need it, the sort of Google or YouTube idea. Well, that's nuts, right? I mean, a salesperson on a sales call can't just stop in the middle of it and say, wait a minute, I need to go watch a YouTube video on how to deal with this objection. A lot of us work in groups, right? So there's no time to stop and do things, and oftentimes we have to be fluent in our responses. So that idea that you can just look things up is not a good one.

And half the Internet is filled with bad information. You have to have at least some knowledge of an area so that you know whether what you're looking at is good or not. The other thing is, people learn better when they already know something. I'll give you an example from my consulting practice.

Whether I'm teaching about learning science, how to build better learning interventions, or learning evaluation: I used to just go out and do a learning audit and dump the recommendations on people, or do a learning evaluation study and just tell them what I found. But I learned the hard way over the years that that doesn't work very well.

If I have a team together and I'm talking about learning science or learning evaluation and I share a recommendation, half the people resonate with it and the other half ask, why are you recommending that? That's a crazy idea. Then you get into this big discussion.

We in the learning field come from so many different backgrounds and perspectives; we're not always on the same page. So now I always require some background first: an evaluation background if we're going to talk about evaluation, or a learning science background before we talk about learning, because you have to have that fundamental knowledge before you can think about things clearly and make decisions.

Tom Griffiths: Yeah, so in a way you qualify people to be in the conversation, in some sense. Makes sense. So let's talk about modalities a little bit. I guess this plays in somewhat to learning in the flow of work versus formal training, but I'm curious: what does the research say about the big debate between live instructor-led training and on-demand e-learning?

Will Thalheimer: So, I looked into this a number of years ago, because it's a question that comes up over and over. And what I found by looking at the research is that if you use the same learning method in a different modality, there's not going to be a difference. In other words, if you use an animation online and an animation in the classroom, there's going to be no difference in the amount of learning.

If you show a video online and a video in the classroom, there's going to be no difference. This was found by Richard E. Clark back in 1983; he did the famous work on this, arguing that it's not the modality that makes a difference, it's the learning method that you're using.

Now, that being said, sometimes the technology allows you to do different things, right? If I'm online and I can get people to go through a simulation, that's going to be better than being in a classroom where people don't have access to the IT system I'm trying to teach. So there's certainly some value in different things. But here's the other thing.

If you just say, build the best e-learning course you can and build the best classroom training course you can, what the research shows is that e-learning actually tends to outperform classroom training. And the reason is that those who are designing e-learning know they can't just broadcast information.

They have to have some interaction, they have to give some practice, and they have to make it interesting. Both e-learning and classroom training can be better designed, but if you just leave them out in the wild, e-learning tends to outperform classroom training. The other thing I think we should be cognizant of is that we're in an amazing age of innovation in e-learning.

And when I say e-learning, I'm talking about all online learning. I was talking to somebody from Australia and I was using the word e-learning, and they said, Will, e-learning here in Australia just means page-turners. Oh no, that's not what I mean. But we had COVID and nobody thought we could actually teach online.

And then we all taught online and it worked out pretty well, except for kids in school. And there are new systems all the time, new learning programs. We're getting better and better at it, and I think we'll continue to do that.

Tom Griffiths: Yeah, I agree. And it is a bit of a fuzzy definition. As I was asking the question, e-learning in my mind was pre-recorded videos, maybe with some quizzes, but you were thinking of anything that's online, which could include live Zoom interactions within that electronic or online learning modality.

Will Thalheimer: Absolutely. Yeah. Absolutely. 

Tom Griffiths: Yeah. And that's obviously what we anchor on at Hone. It feels like one of the advantages that is less known or talked about is that if you have that online live experience, you get the benefits of the interaction and the practice in the moment, which we talked about earlier as being important. But then, because it's convenient compared with flying everybody to a hotel and doing an annual couple of days of training offsite, with the online modality you can do it for an hour every week, so you get that benefit of spaced practice and spaced repetition.

And so we've always tried to capture that benefit through reinforcement, class after class, week after week, as well as through interaction in the class itself. So I agree; I think you can tease out the advantages and actually do better than the physical classroom.

Will Thalheimer: You know, I don't know what traffic is like where you are.

When I'm in the Boston area, it's crazy. Nobody wants to be in a traffic jam for an hour to get to your training program and an hour to get back home. And a lot of people are thinking about climate change, and flying people to a training is really not the best thing for the planet.

Right. So yeah. 

Tom Griffiths: Agreed. And being off the floor, as they say, for a couple of days, that time is also disruptive.

Will Thalheimer: I don't want to say that I would do everything online all the time. I mean, there are reasons to be in person. If you're teaching somebody how to fix some kind of plumbing system or something, they need hands-on experience, and if they have the equipment at home or wherever they are, that's great.

But sometimes they need to be there. And there are also some benefits of in-person learning for other reasons: for bonding, for talking about emotional things. So I don't think we want to do one and not the other, but we really ought to think about what we need and which tool is best for it.

Tom Griffiths: Yeah, agreed. And we recommend to customers that you can do both; you don't have to pick. Say there's an annual offsite, even for a remote team or a team with multiple offices. There can be some learning experience as part of that multi-day agenda for the whole company, and that can sometimes be a great kickoff to a learning experience that's then followed up with real skill building and reinforcement over time online.

So you really can get the best of both worlds, with the social aspect in person and then the reinforcement online.

Will Thalheimer: I was just going to say, let me get meta for a second. The things we've talked about already, and I know we're going to talk about more, show that there are a lot of subtleties and complexities to the job of being a learning designer, right?

And a lot of times our organizational stakeholders don't understand that there's this body of knowledge out there, that there's a lot of wisdom you have to have in building learning. They tend to push us: you need more content, you need more content. But if we can be aware of the science, aware of the complications and the complexities, and make good choices, then everybody's going to be better off.

Tom Griffiths: Yeah, right on. Totally agree with that. So I'd be curious, do you have specific advice for those in the L&D field who are looking to implement more research-based practices or bring more learning science into their toolbox as they design their learning experiences?

Will Thalheimer: Oh, absolutely.

So one, just be motivated to do it. Decide that you want to do this. It's a journey; you're not going to learn everything in a day, right? Number two, gather a team around you that wants to do it as well. I've talked to a lot of people in our field over the years, and those who want to follow best practices sometimes feel like they're the only one, a lone wolf, and that gets stressful and leads to burnout.

So gather people around you. If you can't find them in your own company, find them in places like the Learning Development Accelerator, which is a membership organization of people interested in L&D and particularly research-based best practices, or the Learning Guild, et cetera. There are some local groups as well, but gather people so that you're not doing this by yourself.

And then third, I would recommend that people read the research translators. You don't need to read the research on your own. In fact, I think it's a bad idea if you don't know what you're doing. It's like anything else: you need experience and expertise when you're reading a research study. You need to know things like, well, I can't just read one research study.

I've got to read at least 12 to 20 around this one topic, I need to understand the field of cognitive psychology, I need to have a lot of background, and I need to know about statistics and research design. So it takes some expertise to read a real research study, but there are people out there who do this.

They love it, and they're experienced at it. So read the research translators: people like Julie Dirksen, Clark Quinn, Mirjam Neelen, Karl Kapp, Jane Bozarth, Patti Shank. I know I'm going to forget some. These folks do this time after time, and they're good at it. Of course, the legendary Ruth Clark, Paul Kirschner, a whole bunch of people.

Those are the people to read and learn from. There's also the book Make It Stick, and a whole bunch of resources now. In fact, we're in sort of a golden age of nonfiction, and there are a lot of books out there that really break this stuff down and make it resonate for us.

Tom Griffiths: Yeah, I think that's such a great point, to read the translators.

Because like you say, any isolated research study could have its own flaws. You could jump to premature conclusions, there could be follow-on studies that debunk certain findings, or there could be a limited context in which they're valid. So I think that's such a great point.

And the names you mentioned, many of whom we know and some of whom we've had on the podcast, are a great starting point, so I really encourage folks to check those out, and we can provide some links in the notes to get people started. So thanks for calling that out. I've got one last question for this first part of our conversation before we jump to part two, and it's the question you kind of have to ask in 2023-2024: about AI.

I'm sure you've seen a bunch of different hype cycles over the years, so I'd love to get your take on AI as the next great thing as it applies to learning. 

Will Thalheimer: Yeah, I knew you were going to ask this. Once you started the question, I knew it was going to be about AI. It's been about one year and one month since November 30th, 2022, when ChatGPT 3.5 was announced to the world, given away to the world. A lot has happened since then, and it's getting better. In the beginning, I mostly kept my mouth shut because I really didn't know enough. My only comment was, hey, I've been around a while; every time there's a new learning technology, the first thing we do is screw it up.

We make a lot of mistakes with it, we create some dangerous things, so be very careful. That's how I got started on it. But then people that I trust in the field said, hey, Will, I've actually used it and we're getting good results. So I began to take it seriously and started studying it on my own.

I took a prompt engineering course, which you've got to do to get started. I've gone to two conferences now specifically to learn about AI, and I've been reading books and listening to podcasts. One of the books that I read, which I would highly recommend, is The Coming Wave by Mustafa Suleyman. He is the co-founder of DeepMind,

the AI company that Google bought, and he's also started another AI company, which I think is called Inflection or something like that; I'm not sure. But anyway, he wrote this book, The Coming Wave, and it's not just about AI; it's also about synthetic biology. And what he says was very convincing, and the book is not just about these technologies.

It's really a musing on the history of technology. I thought it was just a brilliant read. And what he says, and he's convinced me, is that anytime there's a new general-purpose technology, general-purpose meaning something like the wheel, or fire, or the internet, the car, smartphones, you're not going to stop it.

It's going to win. People find value in it, they're going to use it, and organizations are going to be built around creating these technologies, supporting them, and getting them out to the marketplace. So you can't stop these general-purpose technologies. Most of them have good parts and bad parts.

Fire is great for cooking your hamburger, but not so great if it burns down your house, right? So what we need to do with AI, he says, is put some guardrails on it. His argument is that the only entities that can really do that are national governments. We don't need to go down that path.

But I'm convinced that AI is going to make a difference. I've been using it myself with some benefit. There are obvious problems with it. There are hallucinations, meaning it spouts out wrong information. Somebody showed me a query where they'd asked ChatGPT about LTEM, who created LTEM, and it said it was some association.

It wasn't me; it didn't mention me at all, and I created it. And not only that, but the model it described as LTEM was something like a five-level model; it wasn't the model at all. So it clearly makes mistakes. It's getting better, but it's not going to be perfect. I do think it's an opportunity for us in the learning and development field.

Number one, we can use it for our own purposes and get some benefits. Number two, though, our organizations are not well outfitted. Most people don't know about this technology. They don't know how to use it properly, how to get the best out of it, or how to keep it from being problematic.

One of the conferences I went to had business leaders talking about AI, and their biggest fear was using AI with customers, for things like financial recommendations or customer service recommendations. They were really afraid that it would tell customers to do something that was not in their best interest.

So clearly, as a whole, people in the working world need to learn about this technology. Well, who's better at helping people learn than us? So we have a real opportunity here to do that well. There's a danger, though. There are a bunch of people running around the L&D field saying they're AI experts when they've

taken a couple of courses or read a book or something. They're not experts. We really need to bring in experts and be careful about that. But I think it's a real opportunity for us. And I know you're going to ask the follow-up question: what about our jobs? Will, do you think it will take our jobs away?

I don't think it will, at least not in enlightened organizations, because it's pretty clear that to use AI usefully, you really have to have a human in the equation who knows what they're doing. So I think if we learn how to use the tool well and we put those guardrails on it, it can be a benefit to us as well.

Tom Griffiths: Yeah, I totally agree with that take, and I appreciate how you started by going on your own learning journey with this new general-purpose technology. I think we've come to similar conclusions at Hone as we think about how we can harness recent breakthroughs for the benefit of the company and our customers, and we put it into three buckets.

One was that it can make us more effective and efficient internally. Our CTO was the first to say, well, actually, let's put some guardrails around it from a technical perspective so that we're staying on the right side of all sorts of data confidentiality agreements and best practices.

But once we've done that, I really do think it's over to the team on the front lines to find different uses for it; to your point, learn how to engineer a prompt and find some of the surprising use cases that can help you in your day-to-day job, rather than us centrally dictating how we use it.

And I know some of the teams, on our content side for example, have been using it for research and generating ideas, and it can cut down the time it takes to recap the important papers or books in a particular area. So that's been really helpful. Second was, how can we use it in the product so that customers get an up-leveled learning experience?

And so we're brainstorming ways that a chat interface could be a way to practice some of the things you learned in a live online class: how does an AI counterpart work in a chat experience, can it nudge you, can it add to the learning experience in other ways?

So I'd be curious if you've brainstormed any ways that it can be part of a learning experience as well as part of work? 

Will Thalheimer: Yeah. Well, I've seen people playing with this. I've been in correspondence with someone who's using ChatGPT for learning design, and she's really trying to teach herself programming, how to be a developer.

She tried it out at first, and she's a learning person, and she realized that if she asked ChatGPT to design the learning, it did a terrible job. So she's been using prompt engineering and playing with it, really trying all these different things out to make it do a better job. She's got it to the point where it uses the Socratic method more, so that it asks the learner more about what their specific learning needs are, et cetera.
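
A minimal sketch of the kind of Socratic setup being described; the prompt wording and variable name below are assumptions for illustration, not the actual prompt she uses.

# Hypothetical system message nudging a chat model toward Socratic tutoring;
# the wording is illustrative, not the correspondent's real prompt.
socratic_tutor = {
    "role": "system",
    "content": (
        "You are a programming tutor. Do not lecture or hand over full solutions. "
        "First ask what the learner is trying to build and what they already know. "
        "After each answer, respond with one probing question or a small exercise, "
        "and explain a concept only after the learner has attempted it."
    ),
}

# This message would be sent as the first element of the messages list in a chat
# API call, ahead of the learner's own messages.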

But there's more for it to do. Oh, and by the way, I went on to ChatGPT this week, I think it was ChatGPT, I've got a bunch of them that I use, and as it was doing its work it said, I'm looking up the answers on the Internet. So it's now nudging us to think about how it actually works, which I thought was really good.

Clearly, we're going to have to put some constraints on this, because the Internet is full of a lot of good information and a lot of bad information. If we try to just let it do our thinking for us, we're going to get into some hot water. I do think there's the possibility of developing sets of prompts that put the guardrails on for specific areas.

These, what are they called, GPTs, where you can create your own app, basically. I think those have some real opportunities. I've been thinking about how I could create one based on all the stuff I've done over the years. One thing I want to experiment with, though I just haven't had the time lately,

is using it with surveys. I'm an evaluation guy, so one of the things you do is survey the learners. We use choice questions, multiple-choice questions, and those are easy to process, but we also have open-ended questions, and those are harder to process. I think ChatGPT, with the right prompt engineering, could be designed to categorize or analyze those responses in a way that would be quick and easy, a lot easier than me going through every answer and picking out a representative sample.

So I think that's a big opportunity. Yeah. 
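
A minimal sketch of that kind of prompt-driven categorization, assuming the OpenAI Python client; the model name and the category labels are placeholders for illustration, not a recommended setup.

from openai import OpenAI

client = OpenAI()  # expects an OPENAI_API_KEY in the environment

CATEGORIES = ["course content", "facilitation", "relevance to my work", "logistics", "other"]

def categorize_comment(comment):
    # Ask the model to tag one open-ended learner comment with a single category.
    prompt = (
        "You are helping analyze open-ended comments from post-training learner surveys.\n"
        f"Categories: {', '.join(CATEGORIES)}.\n"
        "Reply with exactly one category name and nothing else.\n\n"
        f"Comment: {comment}"
    )
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return reply.choices[0].message.content.strip()

comments = [
    "The role-plays felt exactly like conversations I have with my team.",
    "Hard to hear the facilitator over the audio glitches.",
]
for c in comments:
    print(categorize_comment(c), "->", c)

Running a batch like this gives a first-pass categorization that a person can then spot-check against a sample of the raw comments.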

Tom Griffiths: Yeah. I totally agree with that. That's something we're playing with as well. It's from all of the tens of thousands of responses we get that are open ended. How can you do a couple of things? You can summarize them for, you know, the, your partner on the customer side to say, here are some of the themes that are coming up in learning the, it's good.

to be aware of, and you can also use some of those open ended Questions at a learner level to learn about what they're struggling with and feed that into the algorithm for recommending what can come next. And so you can bring a lot more intelligence to the learning needs beyond just multiple choices we've used in the past.

So yeah, I agree with you there. And then from our side, the third part was thinking about how we can help educate the space about AI. I thought it was a great point you made about each of us taking the time to do that, and as learning professionals, as our companies come to learn a new technology, of course we're right there to help them with it.

So that was a great thought.

Will Thalheimer: I think the three things you're talking about seem like good things to do, and it's about being good role models, because we all need to learn about this. You can get around in the world without a smartphone, but eventually most of us learned how to use one.

AI is just like that, right? Same with the Internet and email. Remember, we didn't want to do all that. I didn't want to use PowerPoint; I wanted to use my old slides. But eventually you have to do it. So those of you in L&D, if you're listening to this, get going.

You've got to start learning about AI. The first thing to do is take a prompt engineering course. You can even find free ones online on Coursera, et cetera.

Tom Griffiths: That's great. Yeah, just watch out for its definition of LTEM; we're going to set the record straight. Great. Well, thanks, Will. Really appreciate this first part of the conversation.

Looking forward to part two. Thanks for listening to Learning Works. If you've enjoyed today's conversation, we encourage you to subscribe to the podcast for our exciting lineup of future episodes. Learning Works is presented by Hone. Hone helps busy L&D leaders easily scale power-skills training through tech-powered live cohort learning experiences that drive real ROI and lasting behavior change.

If you want even more resources, you can head to our website, honehq.com. That's H-O-N-E-H-Q dot com, for upcoming workshops, articles, and to learn more about Hone.