Bridging the Gap Between Training and Behavioral Change with Julie Dirksen, Part 2

What's covered

Tom Griffiths is joined by Julie Dirksen, a celebrated author and learning strategy consultant. In part two of their discussion, Tom and Julie delve into the practical application of behavior change design principles in L&D initiatives. They discuss how to conduct more effective audience research, ways to measure behavior change, and how to get buy-in from learners. Julie draws on her deep knowledge of behavioral science and learning design, as showcased in her bestselling book "Design for How People Learn" and her recent work "Talk to the Elephant: Design Learning for Behavior Change."

About the speakers


Julie Dirksen

Author & Learning Strategy Consultant

Julie Dirksen is a celebrated author and learning strategy consultant who integrates behavioral science principles into the design of impactful learning experiences, as showcased in her bestselling book "Design for How People Learn" and her recent work "Talk to the Elephant: Design Learning for Behavior Change."


Tom Griffiths

CEO and co-founder, Hone

Tom is the co-founder and CEO of Hone, a next-generation live learning platform for management and people skills. Prior to Hone, Tom was co-founder and Chief Product Officer of gaming unicorn FanDuel, where over a decade he helped create a multi-award-winning product and a thriving distributed team. He has had lifelong passions for education, technology, and business and is grateful for the opportunity to combine all three at Hone. Tom lives in San Diego with his wife and two young children.

Tom regularly speaks and writes about leadership development, management training, and the future of work.

Episode transcript

Tom Griffiths: This is Learning Works, a podcast presented by Hone. It's a series of in-depth conversations with L&D experts, HR leaders, and executives on how they've built game-changing learning and development strategies, unleashed business growth through talent development, and scaled their global L&D teams. Tune in for wisdom and actionable insights from the best in the industry.

I'm Tom Griffiths, CEO of Hone. Welcome to Learning Works. 

We're back with Julie Dirksen. We've talked about her new book on learning design, Talk to the Elephant, and now we're going to get really practical about how you can use some of these behavior change design principles in your organization. So, Julie, we said in part one how you often think of your work as an entry point to a certain field. How would you advise those in the L&D field looking to [00:01:00] implement behavior change design for the first time? What would you give them as foundational advice?

Julie Dirksen: I think one of the biggest challenges is anytime you're dealing with one of these behaviors where they know what to do, but they still aren't doing it.

There's going to be a research question to it, right? You do need to talk to your users. When I'm speaking to groups, I pretty regularly ask how many people get to spend time with the people in their user audience really regularly, and I don't think I've ever gotten above 50 percent of respondents on that.

And that's something that probably is going to have to change. I'm a big fan of job shadowing as a simple way into that: go follow people around for a day. A lot of times we hear from people that they're not allowed to pull people off the floor, and especially if you're dealing with, say, medical professionals, their time is so valuable.

It's really hard to get them in for interviews or get them to respond to surveys or things like that. But find out if you can go follow people around, because you can always slide questions into the [00:02:00] conversation, and you get to see a lot about their environments. When you do job shadowing, you get to see where the limitations are and things like that, because we always have beliefs about why the behavior is not happening.

And sometimes they're right. Sometimes the subject matter experts know the real reasons why the behavior isn't happening, but there is the issue of: okay, really? Why is the behavior really not happening? So that piece of it is understanding the world and the context your learners are operating in.

And then the other piece of it is that you can't talk to the elephant unless you know what the elephant cares about. Because we've often created these incentive systems where people will figure out how to get the incentive in the fastest and easiest way possible. So, a lot of professional organizations will have a required amount of training that you have to do every year.

And I was working with one that, I think, did continuing education for accountants, and somebody told me a story about someone sitting in the back of a live, physical training [00:03:00] class who'd open up two laptops and be going through two e-learning programs at the same time they were in this physical classroom.

So, by the end of the day, they could potentially have 24 of their 40 required hours knocked out. Well, on one hand, they're being very efficient about hitting the end goal of 40 hours of continuing education. On the other hand, they're probably not accomplishing some of the higher-order things we were hoping for from that.

But we do need to understand that if people are skipping the behavior, there's probably a reason why they're skipping the behavior. And that reason, even if it's not something we're officially going to condone, is super important to understand. And understand what they care about, because if you formulate any kind of learning that's going to talk to people about why they should do this behavior, and you're not framing it in terms of things they care about, or goals they have, or problems they need to solve, it's not going to get up into those top five [00:04:00] priorities.

So, if you don't know what your audience cares about, then you can't design effectively for it.

Tom Griffiths: And as we were saying earlier, that's just so contextual and situational that you need to go and have firsthand observation of it, so you can problem-solve and debug the puzzle, and see what is actually getting in the way, rather than adopting some universal theoretical principles.

I guess that naturally is going to take a bit more time than quickly dashing off some learning experience based on principles you've learned along the way. You actually need to go and do the shadowing exercise. And so an L&D professional who's taking that more robust approach will need to justify a longer or more expensive development process.

So how have you seen, or how would you recommend, L&D professionals communicate the importance of behavioral change design to their organizations so that they can justify a more robust design process like that?

Julie Dirksen: I think some accounting that everybody should look at is: what's the cost of an ineffective [00:05:00] solution?

Because typically we'll look at the development cost of the thing. So we want to farm this project out to a vendor and it's going to cost $75,000 to build out, or whatever it is, or it's going to take this many hours of our L&D team's effort, and it's going to add up to a certain dollar amount to develop.

That's the cost that we tend to focus on, and there's been a huge push in learning and development about doing things faster and cheaper. And everybody can create courses in Rise without needing a developer, and all that kind of stuff. But one of the costs that I don't think we look at is, okay, what if I'm pulling 5,000 people away for an hour to do this e-learning course?

And it's not effective at making the behavior change that we want to have. Well, each of those people, if they're $50 an hour... oh, shoot, I should have done the math ahead of time. What is that? It's a lot of money. Yeah, it's a lot of money in lost productivity for the people taking the course.

And if it's not solving the problem we're actually trying to solve, then now we've [00:06:00] wasted the development time and money on it, but we've also wasted a huge amount of productivity time in our learner base. And we've also got the opportunity cost: whatever benefit we would have seen from them actually making the behavior change is also lost to us at that point.

So maybe they were going to serve more customers, or make more sales, or have fewer customer service calls, or whatever the benefit the training would have provided. And so one of the things I think we don't look at very clearly in a lot of these cases is that the development cost is just the tip of the iceberg on a whole slew of other costs associated with ineffective solutions. Once you start to do the math on that, think about spending a couple of weeks actually making sure you know what problem you're trying to solve, and doing some pilot testing or prototyping or iterative testing to make sure the solution is actually going to make a dent in the problem.

Those costs really are usually fairly small in comparison to the potentially [00:07:00] larger costs of wasted time for all of your learners, plus the opportunity cost of not making the effective change. Now, can you sell that in your organization? Maybe you can, maybe you can't, but I do think it's a conversation we should be having, because we have a tendency to focus, I think, very much on the wrong costs. We've been very focused as an industry on doing the same thing faster and cheaper, rather than doing more effective things that have better impact.
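
To finish the back-of-envelope math Julie starts here, a minimal sketch in Python; the development cost, headcount, and hourly rate come from her examples, while the forgone-benefit figure is a hypothetical placeholder:

```python
# Back-of-envelope cost of an ineffective course, using the figures
# mentioned in the episode plus one illustrative assumption.

def ineffective_training_cost(dev_cost, learners, hours_each, hourly_rate,
                              forgone_benefit):
    """Total exposure when a course fails to change behavior."""
    lost_productivity = learners * hours_each * hourly_rate
    return dev_cost + lost_productivity + forgone_benefit

total = ineffective_training_cost(
    dev_cost=75_000,         # vendor build-out Julie mentions
    learners=5_000,          # people pulled away for the course
    hours_each=1,
    hourly_rate=50,          # $50/hour loaded cost
    forgone_benefit=100_000, # hypothetical value of the unmade behavior change
)
print(f"${total:,}")  # $425,000 -- development cost is the tip of the iceberg
```

Even with a modest assumed benefit, the lost productivity alone ($250,000 here) dwarfs the $75,000 build cost, which is Julie's point about where the iceberg really sits.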

Tom Griffiths: And so perhaps for 5 or 10 percent more total investment, you could move from, like, a 50 percent impact to a hundred percent impact.

And it's quite the incremental or marginal ROI on that time. It's a great way to think about it. Of course, embedded in that is an assumption that we have some way to measure effectiveness, or the extent of behavior change. And so this is a topic close to our hearts at Hone. We spend a lot of time thinking about it, but we'd love to get your thoughts on which measurement strategies are most persuasive [00:08:00] or accurate or believable when it comes to measuring behavior change.

Julie Dirksen: One of the challenges with this, too, and one of the foundational things you do need to start with whenever we're looking at these behavior changes, is to ensure that you actually have behaviors. If an outcome is increased client satisfaction, then you can get into what behaviors are going to increase client satisfaction.

And sometimes you'll see stuff like, well, we just want people to be more customer focused. I'm like, okay, great. If you're being more customer focused, what does that actually look like? Literally, the question I often ask is: if I take a picture or a video of somebody being customer focused, what do I see in this video?

And then they start to be like, well, it's like this, or they're asking questions, or they're making sure they're answering the whole question while the customer's on the phone, or whatever the behaviors are. But often we're dealing with behaviors that are vague, or with outcomes instead of behaviors, any of those kinds of things.

And so there's the upfront work of making sure that you really [00:09:00] have defined these behaviors. And we can do certain behaviors that may or may not affect the outcome. So, for example, if the outcome I want is lowered blood pressure, behaviors could include increasing exercise, reducing sodium consumption, or taking medication regularly. And some of those things may or may not have an impact, right? Sodium consumption is going to impact people who have sodium-sensitive blood pressure, but not everybody does.

And so they could reduce their sodium and still not have an impact on their blood pressure. So one of the things we do need to decide is: are we holding people accountable for the outcome or for the behavior? For example, if you're dealing with salespeople, their outcome is going to be sales numbers, but the behaviors are going to be things like doing more consultative selling, asking customers about their current problems and needs, spending more time listening to what they tell them, those kinds of things.

And so people can control their behaviors. They can't necessarily [00:10:00] control the outcomes from those behaviors. So there's a good argument for saying you should be holding people accountable for doing the behaviors, and if the behaviors aren't producing outcomes, that's a problem for management or the people at the strategy level.

But the nice thing, if we're focused on behaviors and we define them well, is that it does at least get a little bit easier to measure or see whether they're happening. And it depends; some are easier to measure than others. But you will have a very difficult time measuring anything if you haven't defined those behaviors quite clearly.
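
To make that definition step concrete, here is a minimal sketch of one way to record outcome-to-behavior mappings so every behavior has a measure attached; the entries are illustrative, loosely adapted from Julie's blood pressure and sales examples:

```python
# Hypothetical decomposition: each desired outcome maps to observable
# behaviors ("what would the video show?") plus a way to measure each one.
from dataclasses import dataclass

@dataclass
class Behavior:
    description: str  # what a video of the behavior would show
    measure: str      # how you would check whether it is happening

outcomes = {
    "lowered blood pressure": [
        Behavior("exercises three times a week", "activity log"),
        Behavior("takes medication as prescribed", "refill/adherence records"),
    ],
    "higher sales numbers": [
        Behavior("asks about the customer's current problems", "call-review rubric"),
        Behavior("listens more than talks on calls", "talk-time ratio"),
    ],
}

for outcome, behaviors in outcomes.items():
    print(outcome)
    for b in behaviors:
        print(f"  - {b.description} (measure: {b.measure})")
```

An outcome that can't be filled in this way is the signal, per Julie, that the behaviors are still too vague to measure.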

Tom Griffiths: Yeah, totally agree. It's level three in the Kirkpatrick model versus level four.

Julie Dirksen: The other thing I would say about that is that one of the things that comes from that sort of Kirkpatrick view of the world is we're a little more focused on measuring for an entire audience than I would like to see sometimes.

I think there's a lot to be said for small-scale testing and/or cohort measurement. So maybe I can't afford to do a really detailed behavioral measurement of my entire audience of 12,000 nurses, [00:11:00] but can I reasonably create some behavioral measures and try them out with 80 nurses, and actually gather some data about the efficacy of my intervention? Even if it isn't for the whole population, there's still something I can do, and then I can feed that back into the design process to improve the intervention we roll out to the much larger audience.

And I think that's something I don't see happening nearly as often as I'd like in learning and development: where we really do pilot testing with good measures. Even if I can't afford to do it for everybody, can I do a pilot, really capture some measures, and then use that to inform what gets rolled out to the whole population?
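
As a sketch of what measuring a pilot cohort might look like in practice, the snippet below compares the rate of a defined behavior before and after an intervention in a small group; the counts are hypothetical, and the normal-approximation interval is just one reasonable choice:

```python
# Sketch of small-cohort measurement: observe a clearly defined behavior
# in a pilot group before and after the intervention, and report the
# change with a rough confidence interval. Counts are hypothetical.
import math

def proportion_change(before_yes, before_n, after_yes, after_n, z=1.96):
    """Change in the share of people observed doing the behavior,
    with a normal-approximation 95% confidence interval."""
    p1, p2 = before_yes / before_n, after_yes / after_n
    diff = p2 - p1
    se = math.sqrt(p1 * (1 - p1) / before_n + p2 * (1 - p2) / after_n)
    return diff, (diff - z * se, diff + z * se)

# e.g. 80 nurses observed doing the target behavior before vs. after
diff, (low, high) = proportion_change(before_yes=28, before_n=80,
                                      after_yes=52, after_n=80)
print(f"Behavior rate change: {diff:+.0%} (95% CI {low:+.0%} to {high:+.0%})")
```

Even a wide interval from 80 people tells you whether the intervention is worth refining before a 12,000-person rollout, which is the feedback loop Julie describes.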

Tom Griffiths: Yeah, I love that. And especially in large populations where it's impossible to continually check everyone, you can have confidence that if the test population is representative of the broader population, and the intervention was effective there, then you're having a widespread effect when you roll it out. Take that, then, and map it to [00:12:00] knowledge work, if we call it that. A lot of our listeners would be in technology or mid-sized organizations that are less frontline-worker-style training environments. What would be some of the ways you could measure or observe behaviors in that email, Slack, Google Doc-based environment? What have you seen done well there?

Julie Dirksen: I'm a little bit scared of some of the stuff Microsoft is telling us we'll be able to do in terms of gathering data from, like, every Word document your employees use. I'll be honest with you. And I do think that when we start to get into some of these things, having some ethical guidelines we're using to consider them becomes a really important question. I do think it's very important that whatever you're using to evaluate is something that people know about.

And that they can understand what behavior they're trying to aim for, and stuff like that. I will tell you I'm not the expert on harvesting large quantities of that data, but it is going to be an increasingly interesting way that [00:13:00] we can look at this.

Places where people are already measuring something are a good start. Customer service issues are always a nice one, because there's usually some kind of tracking system around them. And so if we make some changes from a training point of view, are we seeing better resolution after a single call, or are we seeing fewer customer service issues coming through? Wherever we can find those measures, they tell us a little bit about it. Cathy Moore has a nice question that she asks at the beginning of a project, which is: what thing that you are already measuring will be impacted if this training works the way we want it to?

And sometimes the answer to that question is that there isn't one. At least then you can start the conversation of, okay, if you're not already collecting this data anywhere, how are we going to know if we've moved the needle on this via the training, or via the behaviors that people learn from the training?

Right. So I think I just danced around that topic, but you get the idea.

Tom Griffiths: [00:14:00] No, totally. And to your point, I think one of the data sources people are already measuring in these contexts is oftentimes an engagement survey. So if there are particular manager scores or behaviors or cultural measures of inclusion they're already measuring, then interventions, perhaps on a subpopulation to really see the impact first, could be a good way to start, at least.

Julie Dirksen: There's a question that therapists sometimes use that I always like, which is: if you wake up and it's magically all fixed, how would you know? So they'll be working with a client, and the client will be like, well, I want to fix this thing about my relationships, or fix this about my life, or whatever it is. And so the magical question is: okay, if you woke up tomorrow morning, what would be the first indicator in the world where you would know, oh, look, that's better now? And then they can identify a series: oh, well, I'd wake up and I would feel better, or I'd actually get out of bed on time and make myself some breakfast, or whatever it is.

And then you can [00:15:00] actually shift the focus: all right, how do we help you do those things that you're saying are representative of what the better world looks like? What are those indicators, and how can we focus in on them? I think there's something nice there: well, if we did have a more customer-focused sales force, what would be some of the things that would happen?

Would it be that we'd hear more positive things back from customers? Would it be different interactions in client meetings? What would be the indicators that tell us the thing we want to have happening is happening? And how can we lean into some of those?

Tom Griffiths: Yeah, that's really nice.

And it allows, I think, for some qualitative measures to make their way into the evaluation, as well as just the raw quantitative. Love that question. So, ROI and proving the quantitative impact of training is a way to get buy-in from management. But what would you say are some of the ways you need to think about getting buy-in from the learners themselves [00:16:00] when it comes to the behavioral change approaches we've been discussing?

Julie Dirksen: Everybody can sniff out inauthenticity the minute they see it; they know when it's not a thing. And sometimes you're asking people to change behaviors that are going to have no clear benefit to them; there's a broader organizational reason why we need to change the way you do client input, or these kinds of things. And I think there's a certain amount of just being honest with people about some of that. Don't tell them it's better for them if it's not better for them. And I continually hear: we want learners who are more self-directed, or this or that or the other thing.

But if you come from a really high-compliance environment, the message from a lot of these compliance environments is essentially: just do what you're told. Don't question it. Don't try to circumvent it. Just do what you're told. If we want learner autonomy and more learner motivation, people taking control of their own stuff, then you need to figure out how to balance some of those kinds of [00:17:00] things.

Because people learn what the subtext tells them. One of the people I interviewed for the book was Christian Hunt, who has a book called Humanizing Rules, I think the title is. And he talks a lot about compliance measures, and about how one of the things we're learning from compliance training is how to ignore training while you're taking it.

How do we train people to ignore the stuff that we're putting in front of them? And so, a lot of this, like I said: if you have a trust relationship with your audience, then they will be very patient with you through a lot of stuff.

They'll be patient if they trust that you understand what their jobs are really like; if they trust that, even if you can't always make their job easier, you're trying to find ways to support them in their work; and if they trust that you've actually paid attention to what they know and what they don't know, so you're not making them learn a bunch of stuff they already know.

There's a whole bunch of ways you can break that trust with the learners. And a big piece of whether or not they're going to engage with any of the solutions you're designing for them is whether they trust what you're talking [00:18:00] about, that you have their best interests at heart, that you're paying attention to them.

I hear a lot of, well, some of the old-timers are the worst. And I'm like, well, a lot of times these old-timers may be doing things because they've got habits that have become ingrained over time, and those aren't in compliance with new regulations or new guidelines. But also, a lot of this training talks down to them like they're dumb, and they're not.

They're the people who know the most about these jobs. That doesn't mean they aren't still doing some behaviors that we find problematic, but it does mean that if you're talking to them in a way that doesn't show you respect the level of knowledge they have about their jobs, them ignoring what you're saying is probably a really logical outcome.

So all of this goes back to what I was saying: you need to be in conversation with them in order to talk to them in a way they're going to care about, want to pay attention to, and want to engage with.

Tom Griffiths: And we've come back to it a couple of times, but spending that extra upfront effort to really understand the context and the learner isn't just [00:19:00] about designing a more effective intervention as a result; it also helps you sell it to the people receiving it, because they feel like you understand them better and the trust is greater.

It's a great point. 

Julie Dirksen: And I think sometimes people believe that this is about being nicer, or being... any number of adjectives could fit in there. But one of the things we did was a research report for the Learning Guild a while back on augmented and virtual reality interventions in the behavior change space.

And I was working with Cindy Plunkett, who did some of her doctoral research on empathy building in healthcare, specifically around patients with Alzheimer's or dementia and things like that. And I was saying, Cindy, I think you need to explain to people why the empathy piece is important.

And she's like, we need to explain to them that empathy is important? I'm like, no, you need to explain to them that empathetic care leads to better clinical outcomes. We're not doing it just because it's nice. That is a reason, and it's a legitimate reason to do something. You might want to do it because it's nice, but also, more empathetic care improves [00:20:00] clinical outcomes. There are reasons why we need to connect these dots that aren't just "it needs to be nicer," quote unquote.

Tom Griffiths: Makes sense. And so we mentioned a bit of technology there with VR, but it's 2023, so I have to ask you the AI question.

How do you envisage AI transforming the role of the practitioner in behavioral change design? 

Julie Dirksen: Yeah, I'm a bit curmudgeonly about the AI stuff. I know it'll get there, but I haven't quite managed to reconcile in my own head some of the ethical and privacy issues with a lot of the large language models and things like that.

But I do think there are a couple of things that are really interesting for our purposes. One is just the clearing out of some of the basic work, right? Early in my career, I spent a lot of time writing out, line by line, the details of how to do stuff in software applications for software training and simulation development.

I took a bunch of screenshots: click the down arrow, take another [00:21:00] screenshot, highlight the thing, take another screenshot. I did a lot of that, and I will happily hand all of that work over to AI. That is just fine. So a lot of the low-hanging fruit in instructional design or instructional technology, things like software training and stuff like that.

That is probably going to go away, and honestly, I'm not going to miss it. Personally, I have a resolution never to do software training ever again. I understand not everybody's in that position, but it does mean there will be a different set of problems left over. In the same way that healthcare has cleared out some of its big challenges through surgery or medication or whatever, and we're left with a lot of the behavioral challenges, I think the same thing is going to happen in a lot of learning and development environments: the creation of simple procedural training is going to be quick and easy to build out because of AI.

And so then we're going to be left with these harder, more interesting, chewier challenges around behavior change. So part of it is that once we clear out that stuff, the "they know what to do, but they still [00:22:00] aren't doing it" problems are still going to be there.

So being able to have skill sets around that matters. The other piece that I'm really interested in, and that I think is going to be really interesting to see, is the ability of some of these models to do feedback mechanisms in digital learning environments, because we've focused so heavily on scaling content.

So, how do I take this video and disseminate it to lots and lots of people? Most of the instructional technology in the world tells you that the basic unit of learning is a piece of content. I think we would have much more interesting learning technology if we viewed the basic unit of learning as a learner action with feedback. But the feedback mechanisms we've been able to do with computers have been pretty stupid, right?

It's mostly multiple-choice questions: you picked B, that was wrong, here's why. The feedback models that can start to happen with some of the AI stuff are more like: here, what are you going to say to this customer? Type your answer. Oh, okay, well, here's a whole analysis of the answer you typed, with some suggestions for [00:23:00] how to improve it.

That's far more interesting than "you picked B, and B is wrong, and C is better." So the ability to do much more subtle feedback on actual learner-generated answers, I think, has the potential to be really transformative for how we do online learning or technology-based learning. It remains to be seen whether it will work out that way or not. I don't know.
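
As one illustration of the free-text feedback loop Julie describes, here is a minimal sketch using the OpenAI Python client; the model name, rubric, and prompt are assumptions for the example, not anything prescribed in the episode:

```python
# Sketch: critique a learner's free-text response against a simple rubric
# and return improvement suggestions, instead of scoring a multiple-choice
# pick. Assumes the `openai` package and an OPENAI_API_KEY in the
# environment; the model name and rubric below are illustrative.
from openai import OpenAI

client = OpenAI()

RUBRIC = (
    "Evaluate the trainee's reply to an upset customer. Did they acknowledge "
    "the problem, avoid blame, and propose a concrete next step? Give two "
    "specific suggestions for improvement."
)

def feedback(learner_answer: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": learner_answer},
        ],
    )
    return response.choices[0].message.content

print(feedback("Sorry about that. I'll look into it when I get a chance."))
```

The design point is the one Julie makes: the unit here is a learner action plus feedback, not a piece of content plus a right/wrong key.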

Tom Griffiths: Yeah, I totally agree. It's a great area to highlight. And alongside text-based evaluation, I've also seen live video evaluation of how you're performing in the moment.

So a speaker in a meeting can get real-time feedback suggesting they draw out quieter members of the room, for example. Or if they're going through a role play, it can analyze the vocal track, or even recognize their facial expressions, to see if they were being sincere.

So some of that's here, and some of it's coming, but I think it's a great point you make about a more immediate feedback loop that can drive better learning outcomes. [00:24:00] All right, so we've come to the end of our part two conversation. I just wanted to wrap up with our rapid-fire round for our audience of learning leaders out there.

Given the current trends in the industry, what do you think they should start, stop, and continue? So, what do you think the average learning leader right now should start doing that they're not doing already?

Julie Dirksen: I'm going to lean back on my earlier answer, because reinforcing it is not a bad thing.

They really need to be engaging more with their audiences, or encouraging their staff to engage more with their audiences, and finding out what's really important or what's really going on with them.

Tom Griffiths: Totally agree for many reasons that we've talked about. That's great. What should they stop doing?

Julie Dirksen: Again, I'm going to lean on an answer I've already given: stop insisting that the only legitimate form of evaluation is one derived from their entire audience.

And look at some smaller, cheaper ways to get more feedback into things. It could be follow-up interviews. Brinkerhoff's Success Case Method is a nice one, where you do a survey and then look at [00:25:00] people who have adopted and people who haven't, and do some targeted interviews with those. Stop being quite so focused on the quantitative, and make sure that you're doing some qualitative.

Tom Griffiths: And what are they doing already that they should continue doing?

Julie Dirksen: It's an interesting thing where we get a little bit focused on all the new shiny things and don't necessarily appreciate the hard work that we're already doing in some of this.

And so I do think that there's some value in things like content production. I think you figure out a lot of stuff while you go through the act of content production. So there's the idea that AI is just going to generate it all for you, as opposed to us having to work through these things ourselves and figure them out.

2023 was a weird year to write a book because ChatGPT dropped, and I will tell you that I don't always know what I think about a thing until I've worked through the act of writing about it, or creating a presentation about it, or really working through it.

And so the idea that we're just going to magically create content and not [00:26:00] do that sort of effortful process around it, I think, is not the whole picture. And so still placing value on that, I think, is something that's useful.

Tom Griffiths: Totally agree. Well, thank you, Julie.

I really appreciated our conversation; I learned a ton. There were so many studies referenced that we'll try to include some of them in the show notes. I think you have a real talent for taking in so many different studies and theories and distilling them into really practical, useful ways of working for learning designers.

So that's what you do in Talk to the Elephant. Really love the book. Thanks so much for speaking to us today. 

Julie Dirksen: Yeah, thank you for having me.

Tom Griffiths: Thanks for listening to Learning Works. If you've enjoyed today's conversation, we encourage you to subscribe to the podcast for our exciting lineup of future episodes. Learning Works is presented by Hone. Hone helps busy L&D leaders easily scale power-skills training through tech-powered live cohort learning experiences that drive real ROI and lasting [00:27:00] behavior change.

If you want even more resources, you can head to our website, honehq.com. That's H-O-N-E-H-Q dot com, for upcoming workshops, articles, and to learn more about Hone.

