Anu Patel:
I'm thrilled to introduce our speaker for today's session, Caelan, our senior learning strategist here at Hone. Hi Caelan. Thank you for being here. And I'm gonna pass it over to you.
Caelan Kaylegian:
Amazing. Thank you so much for the introduction. So excited again to see where everyone is calling in from and kinda all of the feelings and relationships that we have with AI right now. So we're gonna be talking about how that's showing up in our efforts to drive our organizations from AI awareness to AI adoption today.
Let me talk a little bit about why I'm here in front of you today, a little bit about me. I am an IO psychologist. Drop an IO in the chat if you're also an IO psychologist. I think it's one of the most poorly branded professions in this space; it really just means that we are workplace psychologists.
I'm also a facilitator, so as Anu said, be prepared for some audience participation. I'm gonna be coming into the chat a lot for that. Love seeing my fellow IOs. Hello. I'm also a pet parent. I promise that will be relevant later in this conversation. You can drop a dog or cat emoji in the chat.
Horse, pig, I don't know what kind of animals you all have got out there, if you're a fellow pet parent. And most importantly, I think, for this conversation, I'm a fellow people professional, right? So you'll notice I'm not listing AI expert anywhere in my credentials here. I think, like many of you joining this call today, I am here as a professional figuring out how AI fits into what I do as a people person.
And I think we're at this really unique moment in time where all of us in this space are being expected to support our employees in building skills for a technology that we ourselves are still figuring out. And we're being asked to do it at pace, right? At a very quick clip. But I think what's great is that we've all been here before, right?
For us in the people space, this is not the first time we've been tasked with building a plane while flying it or, whatever kind of corporate buzz phrase we wanna use. I do think with AI though, it feels a little different, at least for me. I often feel like I am building a plane with materials that haven't been invented yet, and every time I go into my workshop, somebody has swapped out my tools with something different.
So that starts to feel a little more unprecedented to me. Shakes me up a little bit more. But I think what I've anchored in recently, particularly over the last year and a half is this good news that our organizations don't need us to be AI experts at all. They need us to bring what we already have, which is we have expertise in people, our people in particular.
We have expertise in change management. We do it all the time, and we have expertise in driving behavior change, right? Now all we have to do is apply that to this new context. And so that's what we're really gonna be talking about today, is really why are we uniquely positioned to be the people driving this change?
Why us? What might get in the way? What are some of the real barriers that are keeping our organization stuck between awareness and adoption? And then more importantly, what can we do about it, right? So what are some of these tactical steps that we can take and start implementing right away? Wanna make sure you're leaving with a lot of really juicy good stuff today.
And then finally, we'll have some q and a to answer anything that I haven't gotten to. But ultimately, by the end of today's session, I want you to have a framework and some really concrete ideas that you can take back to your teams right away. All right? That's what we've got planned for today. So perhaps most importantly, and this comes back to being a pet parent, I wanna show you what any of this AI adoption has to do with adopting a pig.
Bear with me. I promise this is relevant. I presented on a very similar topic at the Learning Leadership Conference in Orlando this year. And as I was preparing for that talk and thinking about how do I make this really meaningful, how do I have this really resonate, I became aware of how similar my journey with AI adoption was to adopting this pig that you see on the screen.
So this is Olive. Olive came into my life a few years ago. I picked her up spur of the moment from animal control here in Las Vegas. You can imagine my husband was thrilled when I brought her home. I had no experience with pigs whatsoever. I was so far outside of my element, and I had a lot of misconceptions about what caring for her was gonna be like.
So in some ways I really do feel like AI is that for us too. It's this new unfamiliar creature that has suddenly been let loose in our workplaces and we don't know how to take care of it yet. We don't know how to manage it yet. We don't know how to help it thrive. So this story has a positive ending.
Two years later, this little pig is fully integrated into my life and into my family. But notably, I didn't get there on my own, right? There was a lot of learning. I needed a lot of social support. There was a lot of trial and error involved in getting her to that place. But now we're locked in, really locked in.
Pigs live for 20 years, one of the things I didn't know before signing up for adoption, but she's amazing and I'm excited for the long life we're gonna have together. But where that metaphor comes back to AI is, I think that's our job with AI: to help people move from this place of maybe terrified excitement and chaos around something new to really confident integration into our daily lives, right? This is not gonna happen overnight. It's gonna happen through continuous learning and support. And we're the ones who can help get our organizations there. So thank you, Kathleen, hoping the analogy was landing. Glad we're getting some good food.
But we're gonna take this exotic thing and we're gonna make it normal. And so let's see where our organizations are in this journey. We're gonna look at a map of what maturity looks like over time. So I've thrown out a couple terms already that I think this model helps us to understand.
So I've talked about AI awareness, I've talked about AI adoption, I've thrown out AI enablement and this maturity model helps to put that into a little bit more context, right? On the far left, we have organizations that sit in this awareness stage, right? So they're curious. We're dipping our toes in the water.
We maybe are testing some things out, but there's no structure yet. If we're moving a couple steps over, you get to this place of active and operational. So this is where our pilot programs live, right? We might be testing out tools more formally, individual teams might be experimenting, but still there's nothing that's really being applied to the work itself yet, maybe. And then as we start to move into systemic and transformational use, that's where I think we start to see true adoption. And that's what I mean by AI adoption, right? AI isn't just a project anymore that we're trying to stand up. It's something that's being woven into our decision making, our culture, and just how work gets done in our organizations now.
And so I think this process of getting from point A to point B or moving up that maturity model is what I wanna call AI enablement, right? So what are the steps that we're taking to shape the skills that people need, the systems that people are using, the mindsets that people are adopting that make responsible and embedded AI use really possible.
So great model out of Gartner here. I do want to do a quick survey of the room and see where we think all of our organizations are sitting on this model, right? So I'm gonna have Anu launch a little poll for us. I want you to reflect on where your organization is in the AI maturity model.
Anu's gonna launch that poll for us, and you'll have a couple different options to choose from. So are we just curious, in that awareness stage? Are we more in that middle spot of experimentation, or are we starting to apply it to real tasks and processes? Are we actually in this more mature space of scaling up, or are we really mature?
AI is the DNA of our business. It's baked into everything we're doing. You can't escape it, right? We're gonna let some of those answers roll in and let Anu's judgment tell us when we should publish, when we feel like we've got most of our answers in, and we'll see what we've got going on for our spread here.
So it is published, and it looks like operational, active, and systemic are the ones that are the highest ranked at the moment.
Okay, amazing. Perfect. Yeah. So healthy mix but shows us that most of us are somewhere in the middle, right? So we're still experimenting, we're still learning, we're still trying to connect the dots.
Oh, fun. Now I can see that. Great. Thank you. We're still trying to connect the dots in a lot of ways. I think it's interesting to see, again, maybe 1% of us sitting at that transformational stage and fewer of us still at that awareness stage. But it is a spectrum, it is a continuum where different organizations are gonna fall, and right now there's no single right place to be.
It depends a lot on our industry, depends a lot on our leadership, depends a lot on budgets and funding and where we can get to with those as well. So I think the key here is recognizing where we are, so that we as people teams and people leadership can decide what support our people need next, right?
Based on this maturity model. So now that we have seen where we all fall specifically on this maturity model, I want us to talk a little bit more about why there's such a big gap or why so many of us get stuck in these middle stages, right? So let's get into some of the data. McKinsey ran a study this year.
It showed some really interesting things. So the first part being that 92% of executives are planning on increasing their AI spend this year; I don't think that surprises anybody. But the part that stood out to me the most is that only 1% of those leaders feel that their organizations are mature in deploying the tools that they've already spent money on.
And so what that tells me is that we have the tools, right? Access is not the issue. Funding is often not the issue. But we're getting this bottleneck in the way that we're deploying it, which often comes down to issues with our human capital: how are we managing the change of rolling out AI, and what are the other efforts that we're making in the business to try to transform the way that work is done, so that AI tools can work for us rather than against us?
So I think that's an interesting first point. On the other side, though, we have what's going on with employees. At this point, our employees are often way ahead of our leadership. They're actually three times more likely to be experimenting with AI than their managers think. And so we can see that this bottom-up adoption is actually outpacing the strategy from the top.
And because of that, nearly half of our employees are saying that they want training, but they feel unsupported at this time, and two thirds of their managers are already fielding AI questions that they're just not prepared to answer. They're learning alongside us too. So big picture, you have executives ready to invest.
You have employees who are raring to go, they're already experimenting, and you have managers who are overwhelmed. And so there are a lot of structural components that are missing that could provide support. And when that support is absent, that's where people start to find their own solutions and things start to break down, right?
So people are Googling things, they are playing with tools that maybe they're not approved to be using. They're sharing tips in Slack, and I think some of that can all sound good and fine at first when we're just trying to get people comfortable until we realize that it's gonna open up things like compliance issues.
Data security issues. We may have inconsistent standards for the work or the outputs that are being put in front of people. And there's potentially also tons of wasted time and effort on tasks that are being allocated to AI that shouldn't be, or people are spending time experimenting when we'd rather they be using it on things that are gonna be really impactful.
And that is really where I think we come in as people leaders, as learning leaders: we can be this bridge that helps to make sure that people are being enabled in ways where AI can work for us rather than against us. But I think, to fill this gap, we have to get started somewhere. And a lot of that is even just knowing what we even mean by having AI capability, right?
I talk to so many leaders in a given week, and the conversation around AI is usually, hey, what's happening in your organization? What's your relationship with AI? What are some of the things you're trying to accomplish? And oftentimes what I get back initially is just, well, we need to get our people ready for AI.
And I think that a lot of us feel that way, but it's also like saying, well, we need our people to be better leaders, we need leadership training. It's so true and also so vague that we can't really do a whole lot meaningfully to address it. So answers like that, I think, result in what I've heard called the watering can problem.
And so that's where we're spraying out generic awareness training, hoping it sticks. Or, as a great AI thought leader, Marcus Bernhardt, said in another Hone webinar (if you haven't followed him, he's a great voice in the space), we get stuck in prompt school, right? Where we're just trying to help get people better at prompting, and that's only one very narrow part of AI literacy. So we do all of this hoping that these things are gonna stick and are gonna land, but they're not gonna move people up that AI maturity model, right? We stay in this place of piloting and dabbling, and we're not getting people towards really confident and responsible adoption. So what you're seeing on this slide here is a sample competency model.
It's from the Business Higher Education Forum. There are a lot of proposed AI capability or competency models out there right now. If there are any other ones that you all are using from verified sources, I would love it if you'd drop those resources in the chat.
I'm not saying that this is the best one; I just liked some very particular things about this model. One is that I love that it shows that there are highly technical skills that are still needed, right? We've got AI literacy on there, we've got data literacy on there, but right alongside some very critical human skills, right?
You've got critical thinking, you've got collaboration, and you've got adaptability, right? And none of those are new competencies to us in this space, right? None of this is fresh and new. We've been advocating for the importance of these skills for decades as people in the people space, right? So what I think is new is how these skills show up in an AI powered workforce, right?
So when we're talking about collaboration now, collaborating means that we are collaborating not just cross-functionally with people, but also with a very intelligent machine. So collaboration means we need to evaluate that machine's outputs. We need to be able to challenge its reasoning, and we need to be able to decide whether or not to trust or override this new kind of partner that we have at work.
So all of these human skills are skills that we, as people in the people space, need to continue to protect and to amplify. We're getting our day in the sun, honestly, because these are some of those skills that are not gonna be commoditized by AI. And so we're not gonna compete with that technology.
It's about leaning more heavily into these human skills. And so that I think is where we have a lot of power. And where I'm gonna spend a lot of time today. And I think this is where we start, right? So before we can design any sort of intervention or enablement efforts, we first have to go to models like this and define what does AI capability look like in our organization or for different roles, where are we actually trying to get people and get really specific with some of those things, right?
And then once we know what good looks like, then we need a way to determine where those gaps are. That's our traditional training needs analysis, and then we help get people there. I'm gonna take that a step further, and we're gonna use a model borrowed from behavioral science called the COM-B model, C-O-M dash B.
And I think it's a really great tool for us to evaluate where we can step in as leaders in this space and help enable our organization. So drop in the chat if you have heard of this framework before. It is not typically used in organizational development settings, but it has a lot of really amazing connections.
So again, just curious to see if anybody in the room with us has had a chance to leverage or use this model before. I'm excited for you if you have not heard of it. I have really fallen in love with this model as a way for us to do really good work. So, a little bit of history: the COM-B model is a behavior change framework that came out of the public health research space.
So the researchers who were working on creating it were running into issues with getting people to adopt very specific behaviors that were very important for their health, right? We can think about things like adhering to your medication or promoting more physical activity and doing the things that your physical therapist is telling you to do.
So there are barriers and blockers to people being able to achieve that specific behavior. And so they wanted to come up with a way to identify what the specific things are that are getting in the way of people doing that. And they synthesized a lot of research across behavioral science to come up with this model.
So what I think is great about even just this visual is that it shows that behavior is influenced by so many factors, and that behavior will only change if these three levers, capability, opportunity, and motivation, are supported in really thoughtful ways, right? And so we, as people in organizational leadership and employee development, can use this model to diagnose why certain behaviors that we're trying to see in the workplace right now, things like AI adoption and AI fluency, might not be happening. So these three things together create the conditions for behaviors to be enacted or executed in the ways that we want them to be. So if we go back to Olive as an example, the piglet: when I first brought her home, I didn't have the right skills to do it.
That's capability. My home and environment wasn't set up for her. That's opportunity. Actually came to find out my home was not properly zoned to have a pig, so had to come up with a different place for her to live entirely. Don't worry, she's got a really great setup. But talk about being unprepared.
And I was a mix of both excited and terrified, right? That's my motivation: I'm equal parts scared and excited. And so when those three things were not in alignment, it was really hard for me to be a really good pet parent. But as I got more competent in each of these areas, that adoption of that behavior happened a little bit more naturally.
And the same thing goes for AI in our organizations, right? If we're not moving towards the behaviors that we wanna see in terms of AI adoption, it's usually because one of these three levers is stuck, right? And the solution is not always just more training, as we can see from this model already. And so our role becomes how do we diagnose which one it is, and then how do we design enablement efforts that release that lever so that people can be more successful, right? So we're gonna go through this kind of piece by piece. We're gonna start with capability. Capability has a couple extra components.
So the first one is physical capability. This is just: do you have access to the things that you need, the tools, the internet, the technical resources, to be able to use AI really effectively? For a lot of organizations, it's: has your organization even approved the use of AI tools for employees?
If your organization just approved Copilot, we might be really early on in developing the capability of our employees, right? We're excited to have it, we're maybe figuring out some of the limitations, and we're just getting started here. On the other side, we have psychological capability. This is more about the knowledge, the skills, and the judgment needed to engage in whatever behavior we want really properly.
So this is where training often comes into play, right? Have we trained on the skills that came from the AI competency model that we reviewed earlier, above and beyond just prompting? And so a lot of teams can get stuck right here, right? They might have access to the tools, but they haven't quite gotten people to know when it's appropriate to use them or how to tell if an output is reliable.
And so we actually have a lot of things that we can do to build that capability in practice. Here, I think it's important, before I show some of these solutions, to say that our job is not to make people AI experts, and particularly not to make them AI experts overnight, right? Here, I think, it's about setting foundations for people in terms of the basics.
So here's what AI can do, here's what it can't, and more importantly, here's our company's stance on it and how you should be using it, and establishing that baseline knowledge. But then we also need to know that just knowledge or awareness is not enough, right? So our responsibility becomes more about cultivating the skills to actually do something with this.
And more particularly the judgment, so that people know when and how to interact with AI. As an example, employees might know what AI is at this point, that's pretty standard, but do they know how to use it responsibly? I love the study that just came out from BetterUp and Stanford about workslop.
I think that is the best terminology. If you haven't read that study, just Google or search LinkedIn for workslop and it'll come right up. It's a term they coined for AI outputs that look really polished on the outside but are useless when you get below the surface, right? We're seeing a rise in that type of work being submitted, internally in an organization or to clients.
And so when we think about capability, it's how do we make sure that people know when they can trust AI, when they can question it, and when they just need to override it in the work that they're doing, right? And so here we're not just teaching people how to navigate a platform, right? Anybody can learn how to use a specific UX or tool. We're building those critical thinking skills needed to use AI really well. And so that looks like a couple of different things, again, depending on where we are in that AI maturity model. Thank you, Tess, for dropping in that BetterUp link.
Appreciate that so much. But some specific strategies for when we're really early on, for that 7% of us who might be sitting in that AI awareness stage, are just having some explainers, some myth busters that break down key concepts and make sure that people understand the basics.
Also having access to demos that let them see what AI looks like on tasks that they might do day to day, right? Help it stop feeling really abstract, and model and demonstrate some of those things. I think having support systems stood up and in place, whatever is possible within your capacity, right?
It may be that you can have q and a office hours, or it may be that you just need to publish a quick playbook so that people aren't continuing to come to you or need live time with you, but it gives people a place to go when they have questions. I think we have a really amazing opportunity, and I'll show an example of this in a couple slides, but to introduce new tools like judgment rubrics.
So giving people a framework to walk through that says: can I trust this output? Is it something that I can run with right now, or do I need to go back and reprompt? Do I need to revise and add my human flair to whatever this output is? And then also opportunities for practice and for collaborative work together, right?
I mentioned earlier that collaboration is not just with people now, it's with a machine. And we need to figure out when it makes sense to collaborate with a tool and when it makes sense to collaborate with a person. Through all of these efforts, what we're really doing is moving from just the idea of AI to practice and application.
Which again, I think is what we do through a lot of our training efforts already. Would love to see in the chat how many of you have already spent time focusing on cultivating this capability in your organization? I would love if we could actually use it as a way to crowdsource some ideas.
So if there are things on here that you feel like worked really well, or things that I didn't include on here that you were like, this was a huge win for us, I would love to see in the chat what's working for you. Let's crowdsource and share some of those ideas. And while you're dropping those things in the chat, I'll highlight a company, not one of our customers, but a company that came up in some of my research, that's doing some really cool things.
Great Place to Work did a great highlight piece on a bunch of companies and how they are doing really amazing things to enable their organizations to better use AI. And one of the companies that stood out to me was PwC, which has actually gamified its AI curriculum, which I think is really fun. So they post live trivia and quizzes.
It's on things like standard content from their AI curriculum, but also on things like firm strategy and position and direction around AI. And so employees all get to come together in these virtual monthly games and they can earn prizes, and it's a really fun, playful way for them to connect around something that's, again, entirely exotic and new to them and that people may even have some fear around. So I think this is a perfect example of how learning around this topic can be both engaging and social, and not just another e-learning that people need to complete. And really fun to see that; that's a great way to gamify this.
I'm also gonna go into the chat and see what else we've got going on. Ooh, I love this idea of having a learner persona and some sort of self-assessment tool. Thank you, Claire. So that you have a choose your own adventure of where you're gonna go, maybe. Ooh, yes. Use case discussions. Yes.
Storytelling being so impactful at this stage. Moira, thank you for sharing that there were guidelines rolled out in your organization, but no accompanying training. I'm hearing that a lot, right? We have a 'here are the rules to follow,' but no additional guidance on how to follow them.
Amazing. And thank you, Nathan, for sharing the reflection that we might actually be further down the maturity model than we thought, right? And so hopefully, as we go through some of these other things, we'll be able to see where maybe there are some additional gaps. I do know that there was some excitement in the chat around the judgment rubric, and so I do have one that we've designed as a sample here for you, just so that you can see what this quick win might look like.
I think the judgment rubrics are amazing because they help employees self-check their AI outputs right before they're using them. Ideally, we're building that critical thinking capability to the point where over time they're not needing this tool. It's becoming more automatic, but at this point we need some of that scaffolding, right?
And when we think about that fear of workslop, this can be a really great preventative measure. I think people do need to be trained on how to use it and be shown demonstrations of how to apply it. And so that is my caveat here: it can't just be a tool that's rolled out without additional training, as we talked about.
But here, basically, if they can say yes to each of these questions as they're reflecting, right, then we feel like we can use the output confidently. If not, that's a cue to us that something's gotta give, something needs to change, right? And we need to be able to provide people with options or opportunities for what they might do when they hit a roadblock, right?
So I'm gonna just take accuracy as an example. For verifying accuracy: did they ask AI to cite its sources? If not, well, I recently asked AI for an output and it gave me this outrageous statistic, and I was like, wow, fascinating, where did you get that information? And it was like, oh, I don't know, I just made it up. And I was like, cool, can you not do that next time?
And so that was a learning moment for me to say, okay, even in my prompting, I need to be better about asking it to back things up with sources, right? Or I need to use a different tool that is known for deep research, right? So those are some ways that we can get people to, again, adjust their thinking about how they're approaching the use of AI, right?
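For teams who want to make a rubric like that a bit more tangible, here's a minimal sketch, purely illustrative and not Hone's actual rubric, of how those self-check questions could be captured in a simple script. The specific questions and the "all yes" rule are assumptions for demonstration only.

```python
# Hypothetical sketch of an AI-output judgment rubric as a self-check script.
# The questions and the "all yes" rule are illustrative assumptions, not Hone's actual rubric.

RUBRIC_QUESTIONS = [
    "Accuracy: did I ask the AI to cite sources, and did I verify them?",
    "Fit: does the output actually answer the task I was given?",
    "Tone: does it sound like us, or does it need my human flair?",
    "Risk: is it free of sensitive data and compliance concerns?",
]

def review_output(answers: dict[str, bool]) -> str:
    """Return guidance based on yes/no answers to each rubric question."""
    failed = [q for q in RUBRIC_QUESTIONS if not answers.get(q, False)]
    if not failed:
        return "Use the output confidently."
    return "Hold off: revise, reprompt, or escalate. Gaps:\n- " + "\n- ".join(failed)

# Example: the accuracy check failed, so the script flags it for rework.
print(review_output({q: (i != 0) for i, q in enumerate(RUBRIC_QUESTIONS)}))
```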
Great tool here. We are gonna move forward into opportunity. Again, I think sometimes we get stuck in this loop of, oh, people just need more training, they just need more training, they just need more training. Not to say that we don't need training, but I think there are other areas of the COM-B model that are also really important for us to emphasize.
So let's move into opportunity. Opportunity is all about our environment, right? It's one thing for someone to have the skills and the knowledge on how to use AI, but if they don't have the time to use it, or their leadership is discouraging experimentation, or a workflow is making it really hard to get AI in there, then it's not gonna stick, right?
We can have the best trainings in the world and people are not gonna be able to do anything with them. So opportunity also has two dimensions: the physical component, which is those harder things, the processes, systems, and workflows that make AI possible. And then I love this idea of social opportunity, right?
What are the cultural norms and the social supports that are in place, or not in place, to make sure that we can actually use AI? So here we're not talking about training anymore; we're talking about our ecosystem in the organization. And so again, we have some really cool opportunities, I think, in our functions to impact the opportunity, so we can influence that ecosystem in a number of different ways.
I think it starts with infrastructure, right? Do we have learning hubs or AI centers of excellence, so that guidance is continually visible and evolving? I don't think it has to be fancy; I think sometimes even just a shared folder, if we're getting started, or a Slack channel can help create that sense of anchoring.
When we move into things like workflow integration, I think that's where we can work really closely with teams like ops or tech to identify where are employees experiencing points of friction, where AI or automation could really help. And then where can we provide just in time resources so that people know when those solutions become available?
How can they use them really well in their jobs? Another place to do a little more tactical training and have playgrounds where people can do low stakes experiments and really have those clear boundaries, again, accompanying them around what's okay to input. And then again, I think having some sort of.
Cross-functional group, like a joint steering group that can stay on top of emerging practices. I think this is a really great way to bring our businesses together, cross-functionally to have influence from different voices on how we wanna deploy ai, how we're gonna upscale people around these things.
And all of these create opportunities that help to, again, I think the underlying theme here is how do we remove friction for people? How do we grant people permission and safety to use some of these things. So again, we'd love to see in the chat when we think about opportunity, how many of you have been able to focus some of your efforts or interventions on fostering more opportunity in your organization?
Again, can be some of these things that I've already listed, but also might be other things and strategies that are unique and different from what I've got on here. But would love to see us again, flood the chat with some of the opportunity solutions that we've been able to influence.
And again, while you're doing that, I'll highlight another case study that I found in my research, which was through Mineral. This was highlighted in a SHRM case study, which was really amazing. What I loved about this is they found very quickly that e-learning courses, or even internal things that they created on how to use AI, would become outdated so quickly, right?
Think about how often an AI update ships, right? Even just the UX looks different after a couple of days. So that became really hard for them to sustainably maintain. And so instead they began to rely on communities of learning that they dubbed pods, for really targeted experiments and this more lab-type approach to learning, right? And what I loved is that they found people who were very experienced AI evangelists in their organization, who they dubbed explorers, who helped to lead those sessions, demystify the tools, and provide coaching, along with best practices, when AI was not being used in the way that it should be.
And then they were able to continue those live sessions through this through line of Slack conversations, to continue to follow up on what's working and what's not. Many of you mentioned use cases as being something that's really helpful; I think they did a really great job of leaning into both the social component of opportunity, but also the systems and workflow component of weaving that into Slack and things like that.
So take a quick peek in the chat and see where other people are using it. Yes, AI champions program. I love that. And then brainstorming meetings, that's also super fun, right? So there's no actual tactical thing. We're just coming up with ideas and possibilities as a really great way to get people bought in to what this all can look like in the organization.
Amazing. And so then the takeaway tool I wanna highlight for opportunity is this exercise that I actually did as a part of ATD's intensive on AI that I attended this past summer. A presenter in one session, her name is Olga gva, had us utilize these three buckets: automate, allocate to gen AI, or keep it human, right?
So we listed, for our own jobs, every task that we do in a single week, right? From the quick admin stuff, like I gotta send an email, to something more complex, like I'm in these strategic meetings with stakeholders, right? After we brainstormed all of those tasks, excuse me, we needed to sort them into these buckets, following these guidelines, right?
So if it's something we can automate, that probably means it's repetitive, rules-based, or high volume. We could maybe allocate it to gen AI if it's gonna involve some sort of synthesis or creative generation. And we need to keep it human for tasks that are gonna demand high levels of empathy, precision, or really sensitive judgment, right?
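If you want to capture the results of that sort somewhere reusable, here's a small, hypothetical sketch of the three buckets. The example tasks and keyword heuristics below are stand-ins; in the actual workshop, the sorting is a human judgment call made by the people who do the work.

```python
# Illustrative sketch of the automate / allocate-to-gen-AI / keep-human sort.
# The example tasks and the keyword heuristics are assumptions; in practice
# the bucketing is a judgment call made in the workshop itself.

from collections import defaultdict

weekly_tasks = {
    "Send scheduling emails": "repetitive, rules-based, high volume",
    "Summarize learner feedback": "synthesis of qualitative comments",
    "Facilitate stakeholder strategy meeting": "empathy, sensitive judgment",
}

def bucket(description: str) -> str:
    """Very rough heuristic mirroring the workshop guidelines."""
    if any(k in description for k in ("repetitive", "rules-based", "high volume")):
        return "Automate"
    if any(k in description for k in ("synthesis", "creative", "draft")):
        return "Allocate to gen AI"
    return "Keep it human"

sorted_tasks = defaultdict(list)
for task, description in weekly_tasks.items():
    sorted_tasks[bucket(description)].append(task)

for b, tasks in sorted_tasks.items():
    print(f"{b}: {', '.join(tasks)}")
```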
And so when I did this myself, I found that in my role, things like summarizing learner feedback and qualitatively coding things were so easy to send to gen AI. That was a great candidate, and it freed up a ton of my time to do more strategic work, right? Lauren, thank you so much for asking that question: we actually ran this with the Hone team as well. We did it as an all-hands exercise, but you could do it in a smaller format as well. It was a really great value add for a lot of teams who hadn't yet started to think about strategic integrations for AI and were blocked, right?
You don't know what you don't know. And so when you say, oh, what tasks could you give to AI, if you don't have any experience with that, it's really challenging to know where to go. So I think this is a really great exercise that can be led. And then for us, our joint steering committee was able to go and meet with those teams and say, hey, what did you come up with?
Where can we help you stand up solutions? Which I think is really powerful. There's some good follow up there that was done. I do see some questions coming in. I'm gonna hold onto those for now, but thank you all so much for submitting them. We'll get to them in q and a. I do want us to move into motivation.
This is our last component here. So if we think about opportunity as what's gonna remove those external barriers to people being able to do things right, motivation is going to be that more intrinsic value, that spark and excitement we have about implementing AI in our roles. And it comes in two forms, just like our other components.
Automatic motivation is that gut-level emotion. So the words that I had you share at the beginning of the call, that excitement, curiosity, resistance, fear: we've gotta be able to overcome those emotions. And then reflective motivation is more of our conscious belief systems, right? So do we feel like engaging with AI is worth it?
Does it fit with our values and our goals? And that's not always true for everybody, right? And so our role becomes about nurturing both that emotional safety component of it, but then also the intellectual buy-in that will get people to want to engage with AI in meaningful ways. And so here are a couple of different ways that might look. I think we're all often very good at these for other programs that we have in place, right?
This is our bread and butter. This is where we shine, right? So where are we recognizing and rewarding responsible use? How are we making that really public? Calling it out, helping people to realize that when we use AI responsibly, it's not just risky, well, it can be, but within reason, it's highly valued.
Where are we spotlighting early adopters and telling really powerful stories of how AI is helping people do their jobs better? I know for me, one of the stories that I tell that seems to resonate a lot with people is that I've used Hone's AI Practice tool. I've been in learning and development for 10 years, and I still get so nervous about delivering feedback, even though I can't even tell you how many classes I've taken on it.
But our always-on AI Practice tool allows me to go in anytime I need to have a feedback conversation, run through it very quickly in five to six minutes, get immediate feedback on my tone and delivery and whether or not I checked all the boxes of the SBIW model, and then I feel much more confident going into those conversations.
And that's something that is always in my pocket now and is an amazing tool for me to continue learning. I think also these abilities to connect to things bigger than us, right? So connecting to the purpose of how do we get to spend more time on more strategic or more human parts of work, and making that really clear.
Then also thinking about how we are, again, transforming or helping to transform the business with AI. So are we embedding AI fluency into our competency models? Is it a part of development plans? I know for me, I have an AI hour dedicated every Friday on my calendar where I'm doing this type of research and work and just trying to stay caught up, right?
So how are we enabling that in the organization? And how is it a part of performance conversations, right? If someone is producing workslop with AI, are we addressing that really specifically? These, I think, again, are all things that we are so well equipped to do, and people often look to us to do these things.
And so would love to hear again, just in the chat, highlight some of these things that you have done that you have found to be really impactful for sparking motivation for people. Particularly if you feel like your organization has had fear or concern around rolling these things out. How are you winning over hearts and minds?
Because not all of us are charging forward with as much excitement as it may seem from some of the feedback we get on social media and things like that, and I think that's perfectly normal. Again, another great example of what that might look like: we've got a customer here at Hone that is hosting these learning jams.
I love also all the names that people are coming up with these things. The branding is on point. And so this is again, highlighting case studies of how people are using AI in their work. In really practical applications. Again, I think the key point here is not just highlighting the task, but also highlighting the possibility of what comes from that, right?
If we've reduced time spent on something by 50%, what does that allow us to do with the other 50% of our time, right? Is it that I get to spend more time with clients? Is it that I get to spend more time on side projects that are gonna drive the business forward? Those are things that I think are really important to be able to highlight.
And then this is one of the first customers in our book that I've seen that is actually actively building AI capabilities into their competency model and weaving that very early days into their performance reviews, right? So this is something that is gonna get more mature for them as people get more capable.
But I think it's amazing that it's already starting to show up in ways like this and to be built into the fabric of where the organization is going. And so here's some great examples of some impact stories and what that might look like. I think this is something that you could do live.
If you have the capacity where people are able to share and tell their own stories, you could curate these things and kind of source them from other people. Or you could just have a document or a Confluence page or something where people can come and contribute their own stories.
And then you can come back and highlight some of the ones that you wanna see later. But making a variety of these things really visible, I think, is helpful, so that it's not just your engineering team that feels like they know exactly how AI is gonna help them in their roles; other teams have resources and examples of application that they're able to use as well.
If you've got any AI impact stories that you wanna drop in the chat, even just quick wins of how you're using it in your role, we can start our own little impact stories library in the chat as well. But what are some of the ways that you feel like it's helped you in your role as an HR or people or L&D professional?
'Cause we've gotta, we've gotta see the benefits of this too, right? All right, so we're gonna wrap up this portion. This is that full picture again with some of those questions that we can reflect on. And just remembering that adoption of those behaviors that we're looking for is gonna come from alignment in all three of these areas.
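To make that idea of alignment across all three levers a little more concrete, here's a minimal, hypothetical sketch, not the audit PDF we're sharing, of how a people team might record a quick COM-B check and surface which lever looks stuck. The scores and the threshold are illustrative assumptions only.

```python
# Hypothetical COM-B quick check: score each lever 1-5 and flag the weakest.
# The example scores and the threshold of 3 are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class ComBCheck:
    capability: int   # skills, knowledge, judgment to use AI well
    opportunity: int  # systems, time, norms that make use possible
    motivation: int   # emotional safety and belief that it's worth it

    def stuck_levers(self, threshold: int = 3) -> list[str]:
        """Return the levers scoring below the threshold."""
        scores = {
            "capability": self.capability,
            "opportunity": self.opportunity,
            "motivation": self.motivation,
        }
        return [lever for lever, score in scores.items() if score < threshold]

# Example: tools and training exist, but the environment hasn't caught up yet.
check = ComBCheck(capability=4, opportunity=2, motivation=3)
print("Focus enablement efforts on:", check.stuck_levers() or ["none - keep layering"])
```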
And I think what I love most about this model is that it helps us to diagnose, not blame. So instead of saying, why aren't people doing this right, we can say, okay, what's missing and where can we come in and be really supportive? We have come up with an audit tool that we're gonna drop in the chat that you can utilize.
I have a feeling, based on what I'm seeing in the chat, that all of you already have a gut reaction of where you think your organization could use some additional help. But if you wanted to do this as a thought exercise or spend a little bit more time reflecting on it, we'll drop the link to that resource in the chat.
You'll see it. I think we have a PDF that we'll put in. Yep, thank you so much, Anu. Just a great way, again, to come back to what we've talked through today and reflect on it afterwards. Feel free to open the audit and download it, but I would love to hear even just gut instincts.
In the chat, we'd love to see where everyone thinks their organization needs the most support right now. So is it capability, opportunity, or motivation? Let's see what we've got. Let's waterfall that in the chat.
Here's a little cheat sheet if you need it. Motivation, opportunity, motivation.
Capability, capability, opportunity. Yeah, quite the spread. And I'm really not surprised to see that. I think that, again, we're all kind of out here trying to figure this stuff out on our own, and different organizations have had different strategies thus far. But these are real barriers that we've gotta get through.
And I think once we've identified those barriers, then again, we can provide a cheat sheet mapping to some of the solutions that we talked through on this webinar, right? So if you've identified that capability is the issue, then we're gonna go to some of these solutions on the right-hand side that are focused on things like providing more training, tools, and guidance.
But if it's opportunity, we're gonna be thinking more about systems, environment, and social support, so again, some of the solutions that we talked through on the right-hand side there. And if it's motivation, we need to be thinking about how we're influencing habits, values, incentives, and culture around the behavior.
And so I think this is a really great, brief reference sheet of: we've identified this as the challenge, here's what we can actively do about it. And then it's up to us to determine what we think is most feasible for us to achieve. And I think it's important for us to know that this behavior change is rarely going to hinge on a single intervention.
And our strongest results are usually gonna come from layering a couple of these smaller things. We've already got full plates. AI is not our only priority even though it might be to some other elements of the business. And so we can layer really small moves, right? Throw out a judgment rubric here.
Give people a demo there. Share some success stories to inspire people, and that compounding effect will have a lot of influence on the ways that people are able to drive some of this behavior change. So let's not overwork ourselves as we go into this either. And as we wrap up the formal part of this presentation, I do wanna come back to where we started, and now you've got AI Olive on your screen.
That's the perfect marrying of the two. At the beginning, I shared my story about Olive, about how what felt really strange and uncertain finally became a part of my everyday life. Now, every morning I go out and I fill this pig swimming pool so that she has something to hang out in. I'm sometimes so surprised by how ordinary it feels to have a pig as a pet.
And yes, Becky, she's absolutely real. She has a sister now named Penelope, because I couldn't have just one. I think they're the best. But I really do think that this is ultimately what AI adoption is about, right? We have to take this thing that feels so ridiculous and extraordinary and intimidating to people and make it part of the ordinary fabric of work, something that can ultimately bring them joy and creativity and amazing transformations in our organization.
And I think, based on the chat from today, so many of you are already doing so much work to do that for your people. I just thank you all so much for being willing to share your experiences with each other here and ask thoughtful questions, because our employees really are looking to us, and we really are all in this together, with the opportunity to support each other in some really amazing ways.
I know we're running up on time here. We didn't get too many questions in the chat, so we'll make sure that we can get to those. But we did promise you a quick overview of some of the capabilities of Hone AI as well, so I do want to highlight that really quickly. And if you stick on for another couple of minutes, I've got a quick little surprise as well.
But some of the things that we talked about today that capability, opportunity, and motivation are exactly what Hone is building towards with our own AI capabilities. And so when we think about what we're building here, we're trying not to just teach people about ai, but we want to use AI itself to transform how learning is happening, right?
So what we're building is really meant to bring learning closer into the flow of work, right? Make it more accessible, make it more personalized, make it more impactful. And what I think it's really done is make on-demand learning just this incredibly powerful, interactive experience, right? So for us, we now have AI-led training.
These are bite-sized, 12-to-15-minute interactive lessons that still provide frameworks and opportunities for practice, and they're voice-led. And so it becomes way more effective than traditional e-learning, but can be done in the same amount of time. I mentioned a little bit about AI Practice and how I've been utilizing it.
This is more specific than just broad role play. I'm actually coming in and saying, I've gotta deliver this feedback conversation in 30 minutes, can you help me make sure that I've got all my boxes ticked? And the AI tool is really intelligent: if you're phoning it in, or if you're really amped up about it, it can tell from that tone, and it'll be like, hey, what do we need to do to get you into a better headspace going into this conversation?
Then last but not least, launching today, which is very exciting, is our AI Coach. So this is your co-pilot, your always-available partner that lives in your ecosystem to help employees tackle challenges: I got this feedback, how do I implement it into my IDP?
It's always there and on for you as a people partner, and it will also help them find other related experiences on demand through the Hone platform. I think what I'm most excited about, as someone who, again, is not an AI expert, is that these solutions make learning feel less, I'm sorry, I'm totally losing my voice at the end of this, less like a just-in-time, single event that people have to do. With tools like this, it can become blended into just how we get work done today. Learning is hand in hand with all of the things that we do in our jobs, and I think that is what a lot of us have been hoping learning or people support can look like moving forward.
So AI has been a huge unlock. For those of you that have stuck with me to the end, thank you so much. I would love to give you all a two-week free trial so your people can try Hone AI for free. You've got the QR code and the learning leaders link there if you would like it; this will be available to you.
You can also reach out to us directly if you attended and didn't have a chance to grab this. But please hop into the Hone AI platform, play with it, try it out, let us know your feedback, and ask us any questions about it. And again, thank you all so much for everything that you do for your people.
Really excited to have this space where we could come together to talk about all these things. And you will be getting a follow up email with this slide deck and the recording will be available. So again, just thank you all so much.