[00:00:00] Tom Griffiths: Thanks everyone who's joining. What we have for you this time is an amazing conversation that we're about to have with two friends of mine, friends of Hone. Marcus and Brandon.
[00:00:11] Marcus is an esteemed strategist, consultant, and researcher in the AI space, and has an amazing perspective, having studied many companies in their AI adoption cycle, on what's working and what's not. Brandon has had an incredible career leading learning at amazing organizations like Docebo, Starbucks, Walmart, Delta, Microsoft, and others.
[00:00:34] Guys, welcome. Thanks for doing this.
[00:00:37] Markus Bernhardt: Good morning.
[00:00:37] Brandon Carson: Morning. Thank you.
[00:00:38] Markus Bernhardt: Good morning, Tom. Yeah, good to be here and looking forward to this one.
[00:00:42] Brandon Carson: Yeah.
[00:00:43] Tom Griffiths: Absolutely. We will get right into it. We are here to talk about the AI gap. What is the AI gap? By that we mean there's an enormous amount of enthusiasm, as we heard in the chat, and a lot of exploration.
[00:00:56] Which is probably the number one word coming up from the audience right now. But we're not necessarily seeing that translate into business results just yet. A study from McKinsey showed the same: about two thirds of organizations are still stuck in the prototyping and exploration phase as opposed to fully operationalizing.
[00:01:14] So Marcus, starting with you, I'd love to hear do you see this as a real gap? Is the research right? What is this gap we speak of?
[00:01:22] Markus Bernhardt: I agree fully. I think, from my experience, I tend to see organizations struggle at different stages. And it depends, obviously, on their adoption cycle and how aggressively or how much they've been leaning in on this.
[00:01:36] I generally talk about the surface wave and the undercurrent here, to give it two chapters. The surface wave for me is that initial piece: that is all the bling activity, that is prompting, that is the copilots, and that is improving efficiency for the individual. So people are utilizing it in their workflow.
[00:01:55] Often they start by summarizing PDFs, but in the meantime they get to writing emails and doing far more interesting things. The reason I call that the surface wave is that usually, after this boom in personal efficiency, they then go back to writing another email or logging it in the CRM the way they would have years ago.
[00:02:13] And so the workflow is exactly what we used to have, the old piece, but in their own little bubble they're becoming more and more productive. The problem is that speed doesn't automatically translate into more efficiency or into any ROI. In addition, we see additional problems, such as the
[00:02:33] hyper version of the reply-all email. If everyone is writing five x as many emails with five x as much content because it's copilot co-written, and everyone else has to read them and scour through them, here we go: there's a lot of lack of efficiency in there as well. And that's why I then describe phase two really as the undercurrent:
[00:02:52] how are the ways of working changing, how are the workflows changing, how are sign-offs changing? And there it really becomes about workflow systems architecture. If we want to bring more agent pieces into the workflow, we need a clearer understanding of what decision rights there are, what runtime oversight there is, and how those people who used to do the job
[00:03:14] personally now become architects of the system and help keep it running, because these systems can be very brittle and need looking after. And those are the two big chapters. Organizations are mostly stuck in chapter one, the surface wave, when it comes to training and upskilling their teams and rolling out policy and governance. And yeah, there's
[00:03:34] no blame here, right? Everyone needs to go through this transition. Everyone, in their sector with their regulatory challenges, needs to see when is the right moment to kick this off. And organizations are facing those gaps, from my point of view. So I fully agree with the research there.
[00:03:52] Tom Griffiths: Yeah.
Thanks, Marcus. Love that two-wave, or two-chapter, model. Brandon, how does this align with what you're seeing?
[00:03:59] Brandon Carson: Yeah, I concur. I think there are three core gaps right now. But to me the first rule is this: AI has to fit your business model, your talent strategy, and your culture. And when I think about those three core gaps, there's a leadership gap, there's a talent gap, and then there's a tools gap.
[00:04:19] When I think of the leadership gap, there are five questions I would ask every leader to ask themselves when it comes to the business transformation that's going on today. One of those is: what does it mean for me? The second is: what does it mean for my team? The third is: what does it mean for the company?
[00:04:38] The fourth is: what does it mean for this industry, to Marcus's point on the sector you're in? And the fifth is: what does it mean for the communities in which we operate, and for society in general? And the other thing to understand in this leadership gap challenge, which I think is the most critical as we embark on this evolution, is that leaders should be aware
[00:04:59] it's really hard to predict. There are potentially multiple futures, so it's really gonna be hard to predict one future from where we are right now with AI. And I think it's also important for leadership to recognize what I call the four I's of leadership that AI can't replace, and these are important behaviors and attributes: instinct; intuition, which really is about knowing without knowing why; imagination; and the most important of all, integrity.
[00:05:32] In this period of uncertainty, it's really important for leaders to use those as anchors as they lead through the transformation. It's a critical period, if you will, in how we're moving forward. And leadership, tone at the top, all of these things are critically important.
[00:05:52] One critical gap right now is in decision quality. I think leaders, more than ever, need to establish trust, embrace not knowing, engage in dialogue, distribute authority: basically systems leadership. So I think that's a big gap for organizations to focus on, the leadership gap. And then of course there's the talent gap.
[00:06:13] I would say talent strategies are critical. We may have multiple futures, but we do know one thing for sure: the world's not gonna stay static. We still need to hire for technical and foundational skills, and people need to be able to add insight as well. So when you think about hiring, think about those technical and foundational skills, but also about how
[00:06:35] we all need to be able to derive insight. And I would argue we need to move away from human in the loop to human in the lead, which is a Julie Sweet quote I heard from Accenture. I thought that was really fantastic, because it means the humans are held accountable, and so we definitely need to understand the nuance of human in the lead.
[00:06:57] We're all hearing these things, and it's hard to predict. But my advice is to look at the tech companies and what they're doing. They're the leading indicators of some of what's gonna be happening.
[00:07:11] The next 18 to 36 months are gonna tell us a lot more about the job situation. Since 2023, tech companies alone have had over 600,000 layoffs. It's been somewhat of a bloodbath. But I would see these companies as a forecast of what's coming for broader companies as well.
[00:07:34] So I would say the most important skills in the talent space right now, for everyone out there: deep data skills, the ability to derive insight from that data, the ability to craft stories, all of that. And then there's the tools gap. There are two critical aspects to the tools gap.
[00:07:52] And I'll talk about adoption versus absorption. So if you think about adoption: you're using AI, great. Most companies are having some challenge here. You need to provide the workforce guidance and room for experimentation, and a lot of times that translates to time. And you need a good 70 to 80% of your workforce to adopt AI for it to be
[00:08:15] successful. Absorption, of course, is about deep usage. How are people spending their time using AI, leveraging it, augmenting themselves with it? Absorption is really about moving everyone to deep usage. So I think those three gaps exist, and it's just: what are we doing within each one of those gap areas?
[00:08:35] Tom Griffiths: Thanks, Brandon. And there's an analog there between Marcus's two waves, the surface and then the deeper current, and adoption at the surface level and absorption at the organizational level. So I like that connection. And rather than one gap, the three gaps you mentioned, the leadership gap, the talent gap, and the tools gap, all need to be solved.
[00:08:57] As I was listening to you, I was thinking: I've seen companies close one of these gaps but not the others, and that's not enough. It's not enough to just give people the tools, to close the tools gap, but not give them the space or the leadership to use them. It's not enough as a leader to say, hey, we're all on board,
[00:09:16] we're gonna be an AI-first organization, but not have the tools or the talent to make that a reality. So you really do need to close all three together to make it work. Curious, Brandon, given your experience, do you see a difference here between bigger companies, big enterprise, and smaller companies, scale-ups or startups?
[00:09:37] Brandon Carson: Honestly, you know what the biggest use case in the enterprise for AI is right now? It's to create PowerPoint decks and help with writing.
[00:09:44] So, I think scale to me is not about size; it's more about impact.
[00:09:49] Tom Griffiths: Yeah.
[00:09:50] Brandon Carson: And I think right now we're in that rough time of trying to figure out where it is most helpful: to help me, to help the organization, and all of that.
[00:10:03] So I think it's important to identify your ROI, and I'm gonna argue it's probably not revenue right now. What are the statistics, the things that matter? And I would say I don't see a lot of difference between a midsize company and an enterprise company, other than maybe some of the bureaucracy.
[00:10:20] But it's funny, when you get multiple humans together, bureaucracy just emanates. That's kinda a lot of how we operate.
[00:10:27] Tom Griffiths: Yeah.
[00:10:27] Brandon Carson: But I think a couple of things are clear now. When we talk about AI transformation in a small, medium, or large-sized company, tech is really not the problem. Tech is the easy part,
Markus Bernhardt: Right?
[00:10:40] Brandon Carson: I would say there's no such thing as best practices. I think AI is hyper-personalized to the company; it's gonna be unique for all these companies. And I would say what really could help you is to choose the medium-sized problems that you think you can solve with AI.
[00:10:59] Not small ones, not big ones. Look at your workflows, to Marcus's point earlier; look at your business KPIs; track how people use AI, the models, the capability gaps; identify your power users and scale them by building champions. All of those kinds of things. So I don't think it's necessarily a challenge with small to medium-sized businesses.
[00:11:24] The bigger challenge, and this is something we're going to see right now, is traditional companies versus AI-native companies. And I think that's
the bigger challenge: the major fallacy the enterprise is living in right now is the simple fact that I'm not hearing a lot of them taking AI-native competitors really seriously.
[00:11:45] And it's not because they might be faster or cheaper; it's because AI-native companies are structuring around different economic drivers and different decision mechanisms. So the core strategic problem in medium to large enterprises is that they optimize around scale, efficiency of existing processes, and incremental innovation,
[00:12:07] while the AI-native companies are optimizing around continuous learning loops, and they're treating data as the engine of competitive advantage. It's subtle, but the mindset inside these organizations is really different. So again, scale's not about numbers; it's really about that impact. But we need a mindset shift inside legacy companies along with the transformation, because once the AI-native companies come to market, they're really going to surge far beyond you if you're still stuck in
[00:12:40] optimizing around existing processes and not seeing around the edges of these things.
[00:12:47] Tom Griffiths: Yeah, totally agree. And I think AI is often billed, certainly at bigger companies, as an efficiency driver: we can do the same amount of work with fewer people, less investment, so let's do the same old thing but more efficiently.
[00:13:02] Whereas for AI-native companies, it's really an opportunity creator. We can run a hundred x as many experiments, or ship a bunch of features faster and fix them while they're out in the wild rather than have a big, long planning cycle, and find opportunities we wouldn't have had otherwise because of the new way of working or the new technology.
So it needs to be a mindset shift, like you said, to capture new opportunities and velocity, as well as efficiency.
[00:13:27] Brandon Carson: Yeah, a hundred percent. I would say size can actually be a disadvantage for scaling AI. You have turf battles, you have silos. Change management's critical. Can you really move fast?
[00:13:38] You need to be able to move fast, and inside legacy or traditional companies there's so much bureaucracy. That has to come from the top. Marcus is really good about talking about top down, bottom up. But the change management that's needed has to start at the top.
[00:13:56] Tom Griffiths: A hundred percent. Marcus?
[00:13:58] Markus Bernhardt: I was just gonna throw in: yes, on that topic. Smaller orgs move faster because data and decision-making are closer together; there's less friction and there tend to be fewer handoffs. In enterprises, we get stuck with more resources and tooling, heavier governance, uneven access, fragmented workflows.
[00:14:14] Those things all obviously play a role. But what Brandon really beautifully described there is that it's so hyper-personalized that every function, every team within that function, and every sub-team within every team needs to be doing something slightly differently with it than everyone else, in order to make their work and their workflow
[00:14:35] optimized not just for the individual's efficiency in their own little bubble on their own computer, but for a team and for a function. And for this to work, there has to be trust in place, and the trust doesn't come from happy-clappy positivity. Trust comes from transparency and governance.
[00:14:53] We have to be able to say to people: you can go and run with these things if you stick to the guardrails we've given you, and you, in your function and your team, need to figure out how you're best using the tool, how you're lifting your function to the next level, and how you are evolving.
[00:15:10] And so that transparency is key. That's the guidance from the top, but it initiates the bottom up. The people who will know best how to get the job done are the people who've been getting it done for the last three years in your org. And if they know enough about AI, they will also have fantastic ideas about how to utilize AI going forward.
[00:15:29] So those organizations that are focusing purely on top down are gonna miss all that energy from the bottom up, and they will have teams that are disenfranchised by the fact that they're not getting clear guidance.
[00:15:42] Brandon Carson: Yeah.
[00:15:42] Markus Bernhardt: If the positivity is there, guidance needs to be there. Brandon?
[00:15:46] Brandon Carson: Yeah. I think what's really important is the tone at the top; the change leadership really is what's needed. And once that is fired off, it drives momentum from the bottom, from, like you're saying, Markus, the people that actually do the job, right?
[00:16:02] That momentum is what leadership needs to be able to accelerate, especially in these larger organizations. Leadership needs to know strategically what's possible and what makes the most sense, and that will drive the momentum from the bottom up, so everyone feels connected culturally to what matters most in this hyper-personalized world, what we're doing with it, what we're going to be able to excel at, and how we meet our KPIs and all the things the business needs us to do.
[00:16:32] Tom Griffiths: Yeah, that's a great point. And actually a nice transition to our first poll, because what we've talked about is: there is a gap, but it isn't necessarily the technology. It's things like leadership, talent, and the way we roll out our tooling. So what is holding us back from closing that gap?
[00:16:50] We'd love to hear from the audience; just click your answer on this poll. What is the biggest blocker to AI impact right now? Is it that AI tools haven't been approved? That it's unclear how you should use AI right now? That people don't trust it or act on its suggestions? That it feels hard to measure? Or, to Marcus's point, is it a lack of
[00:17:11] clear rules or guardrails? We'll just leave this up a few more seconds and then we'll take a look at the results.
[00:17:25] All right, perhaps we can close the poll and see how it's looking. Okay, interesting. So it's not a lack of trust; that scored zero. It was really a lack of clarity: unclear how people should use AI at work. Chaps, any reaction to this?
[00:17:43] Markus Bernhardt: For me, this is a clear picture of the surface wave, right?
People want clear guidelines on how to use it, and what they usually mean by that question is: how do I use it on my computer today for the next few tasks?
[00:17:54] Tom Griffiths: Yeah,
[00:17:55] Markus Bernhardt: That's the efficiency spiel. And within that, the worker is normally informed and experienced enough to be the expert in the loop who says this was nonsense and this was actually useful.
[00:18:07] So we have that check and balance in place. It's very unsurprising for me that trust is at zero. On the trust question, there is no trust gap, and that's because, in order to start experiencing that, you have to have AI embedded in workflows, and the agentic piece has to actually have agency.
[00:18:28] If an agent actually makes a decision on something, A or B, then you have to be able to trust that system, and you have to ensure that you've set the workflow up properly, so that it is making the right decisions, and, if the right data isn't available to make a decision, it gets flagged to a human.
[00:18:44] Those are more workflow issues that I would expect to come in the next wave in those organizations. But that's my immediate gut reaction.
[00:18:52] Brandon Carson: Yeah, it was interesting to see no trust problem listed. I do think we're in a big state of uncertainty.
[00:19:02] And, to Marcus's point about fragmentation, we could go on for hours on this, but companies are already overwhelmed with disconnected systems: governance spread across clouds, data platforms, applications. AI right now is making that fragmentation more visible and, in many cases, more acute.
[00:19:24] And I think once we're out of this sort of predicting-the-next-letter phase of Gen AI, we'll look back on it and probably commiserate about it, almost like when I thought I was really kicking it with my 9,600 baud modem, right?
[00:19:44] And then I got the 12.2 and I thought I was on fire. So we'll probably look back at this once the agentic systems are really augmenting humans in the workplace. We'll see, once they start getting deployed everywhere, how different that will be. But we have to be careful about how we even construct the infrastructure to enable that.
[00:20:06] Because if we move too fast, not giving the human workforce time and agency to develop capabilities, to understand the task separation, the things humans do best and the things the agentic stuff will do best at, all of that kind of stuff,
[00:20:27] then we're going to add to complexity and fragmentation if we're not careful. We need our agentic systems to have as much context about the job as humans do, and as they get more capable, the opportunity gap between what the models can do and what teams can actually deploy may grow.
[00:20:48] So we just have to really use this moment in time to close those gaps we mentioned. There are leadership gaps, there are talent gaps, there are tools gaps, and we need to give our people time, guidance, and direction to get beyond the fear and the anxiety. Like I said, with the 600,000 layoffs in tech and the layoffs that are coming, we heard Jamie Dimon, we heard Doug from Walmart, the CEOs are now telling us every job will be impacted by AI.
[00:21:24] And so this is the call to action for the people functions: to work with the business functions to co-own the skills agenda, to make sure the humans have the capability and capacity to make the mindset shift, to understand what machine intelligence is bringing them, to make it better for them, all in the service of doing one thing that's really important.
[00:21:49] How do we make work better for humans? That's the number one question we should be asking as we're working with the business functions to co-own and co-create the agency that both AI and humans will be bringing. We're not gonna have a lot of moments in time to think about this.
[00:22:11] This is that moment, as we're rewiring the architecture, and, as Marcus says, AI will become the infrastructure. I remember in the late nineties I was in publishing, and I remember asking my boss: hey, I think we need internet on everyone's workstation. And at that time, Netscape Navigator was $45 a license.
[00:22:32] It wasn't a free browser. And he's like, I need you to write a justification for the internet. So we're writing justifications for AI in some regard, but AI is going to be like electricity. It's going to be just a part of our work lives, right? That's gonna happen, but we need to take this moment and make sure we are giving the right guidance to our workforce.
[00:22:53] Markus Bernhardt: Yeah. If I may add: measure the right thing.
[00:22:57] Brandon Carson: yes.
[00:22:57] Markus Bernhardt: No one has calculated the ROI on the Microsoft Office package for a large enterprise; that's not how it works. That doesn't mean I'm saying don't measure your value or don't have KPIs. But there's gotta be a measured approach to this. And yeah, Brandon's point about writing a justification for the internet is another hilarious one.
[00:23:20] Yeah.
[00:23:20] Tom Griffiths: Yeah. I'd love to ask you guys in a second for some specific examples of organizations you think are doing some of this really well. But I did want to recognize a distinction that both of you made in reaction to the poll, which is the difference between two flavors of AI right now: there's the assistant approach, and then there's the agent approach.
[00:23:44] The assistant approach is probably the more familiar one for most people right now, where you have the ChatGPT window and, like we said at the start, it's writing your email or your document, or it's reviewing something for you. It's often one-shot or a conversation, but there's a human in the loop and you retain the judgment and the decision-making.
[00:24:04] And yes, it makes you more efficient, but perhaps doesn't change the world. The second flavor is the agentic flavor, where we're actually automating work, or the decisions themselves are sitting with AI and it's taking action on our behalf. And that's where the workflow automation comes in and the huge organizational leverage comes from.
[00:24:27] But I would argue that the gaps in leadership, in talent, in tooling kind of 10x when you go from the simplicity, in some sense, of the assistant world to the agentic world, because it's just so much more complex. It's worth folks recognizing that distinction and understanding that even in the last few months the availability and capability of the agent approach has become much more present. So it's worth keeping up with that version of AI, not just thinking of assistants.
[00:24:55] Markus Bernhardt: For me, there's a real key point here. We've had a lot of agent washing, so to speak, going on
[00:25:01] Tom Griffiths: Yeah.
[00:25:02] Markus Bernhardt: the last few months. And it is not helping us that people are worried about agents taking their jobs, and everything is now agentic, and it's this huge thing.
[00:25:11] That's just too much to cope with for most people, who don't have time to look into this every day. I think we sometimes have to take a little step back and ask: what are the next steps? And you've outlined this really nicely. There's the individual piece, and then there's the automation slash agentic piece, and that's a really crucial distinction.
[00:25:29] After we do individual stuff, we can move to automation. Automation is nothing scary and nothing new. Every CRM sends out an automatic email the moment someone books a demo. That is automation. That's not agentic. You can call it agentic if you really want to, but it's an automation piece we've had forever.
[00:25:48] It just does that: it sends the email automatically because there was a trigger, and there can be no hallucination there because it's an if-then piece. And a lot of the automation pieces, even the ones with AI, can be done without any large language model, with classic, more traditional AI, where we have if-then rules and then it does something.
[00:26:10] So one doesn't have to say, oh, I'm now this agent person and a vibe coder. There are simple steps we can take next to explore this in our teams, to take the next steps forward, experience it, and see what the value is. And then of course we can explore that further in terms of what the actual agency is.
[00:26:29] The automation piece that sends the email automatically has no agency. It was told: if someone signs up for a demo, send the email out. That's what it's done. If I tell a GPT to research the news for me every Monday morning and come back with a bulletin, I've given it no agency. I've told it exactly what to do, when, how to do it, and when to come back to me.
[00:26:47] Yes, it did a little bit of work on its own, but we don't have to throw the word agentic in there to make that super complicated. It only really gets to trust pieces when I've built a longer workflow with several steps in it and I need the right data to make those decisions.
[00:27:03] If I have that data, the system can make a decision and move forward; if it doesn't, it flags it to a human. And only very late in the process do LLM components maybe come in as part of that as well, and then you can have hallucinations in these things. But in order to jump on this whole process, you have to get back to your basics first.
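The trigger-based automation Marcus describes can be sketched in a few lines of Python. This is a minimal illustration, not anything from the conversation, and every name in it (the event shape, `handle_demo_signup`, `send_email`) is hypothetical; the point is the pattern: a deterministic if-then rule, no LLM, and an escalation to a human when the required data is missing.

```python
# Hypothetical sketch of if-then automation with a human fallback.
# No LLM anywhere in this path, so there is no hallucination risk:
# the system only ever does exactly what the rule says.

def send_email(to: str, subject: str) -> None:
    # Placeholder for a real mail/CRM integration.
    print(f"Sending '{subject}' to {to}")

def handle_demo_signup(event: dict) -> str:
    """Trigger-based automation: zero agency, purely deterministic."""
    if event.get("type") != "demo_booked":
        return "ignored"              # not the trigger we were told to act on
    if not event.get("email"):
        return "flagged_to_human"     # required data missing -> human decides
    # The rule fired and the data is there: take the one prescribed action.
    send_email(event["email"], subject="Your demo is booked")
    return "email_sent"

print(handle_demo_signup({"type": "demo_booked", "email": "ana@example.com"}))
print(handle_demo_signup({"type": "demo_booked"}))
```

An agentic system would differ precisely where this one refuses to: it would choose among actions on its own, which is where the trust and oversight questions discussed here begin.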
[00:27:22] The first thing is people have to understand how the copilot works and why it functions really well for certain things and doesn't for others, and they need to get onto that learning curve. Only once they've done that are they ready to think about some workflow automation piece and maybe the first early pieces of something agentic.
[00:27:41] But for me, a lot of agentic pieces have zero agency, and my test is always: when you get GPT to help you with your holiday planning, does it give you suggestions that you review, and then you do the booking? Or did you tell it to take your credit card, run off, and book the whole damn thing?
[00:27:57] The latter would be agency. The first is an automation piece that brings back information you review. And so the point is, we're making this sound so much more complicated, through all the marketing out there, that people are rightfully scared to take the next steps, because they always think the next step is this gigantic step into the unknown.
[00:28:18] And if we simplify it a little bit, we can see: no, there are next steps right in front of us that we can take, and they all move us, and our teams, in the right direction. So the skilling and the trust journey isn't as big as the marketing ploys maybe make us believe.
[00:28:32] Tom Griffiths: Yeah.
[00:28:33] Great clarification. Thanks, Marcus. Brandon?
[00:28:36] Brandon Carson: I'm gonna be a little provocative here. I think this is a moment where, if you take the last quarter century, 25 years, the people function has represented itself really well. And we're meeting the moment; I really firmly believe many of us are meeting this moment.
[00:29:01] We've had three major moments in the last quarter century. We had the rise of the internet itself, right? Business models changed, the way we build capability amongst the workforce changed, and we grabbed that moment. I was in the valley during that time, and all these tech companies were like: get everything online.
[00:29:20] We're going to do mass training, training at real scale, building capability. Then we had the cloud, and some of us remember Y2K, the night we were waiting with bated breath. But our move to the cloud was assertive and fast, and the most significant move from a technology perspective
[00:29:43] for corporations that's ever been done, right? This was massive, and it laid the foundation for this third major milestone, where we're integrating all of this intelligence into the workflow. We needed these steps, and HR and L&D were really proactive: getting their seat at the table, becoming the table in some regards, setting that table, and doing a lot of the right things.
[00:30:10] During these challenges that we've had. These milestones, I firmly believe the ones that, the people I'm talking to, we're meeting this moment. But we, there are some things that we have to figure out in a cultural context, and we have to make some big bets in some of these things. Like, we don't know.
[00:30:27] There are potentially multiple futures. We don't know which future we're gonna be a part of, right? But I firmly believe that the companies that double down on continuing to hire entry-level workers, and you need to do that, will be ahead of these AI-native companies in three to five years, by bringing that talent into the organization
[00:30:49] and helping them build the expertise that's unique to your business. So just because AI can do all the coding, don't stop hiring junior software engineers; double down on it. What you need to do around that, though, is rewire entry-level job descriptions, because right now, yeah, AI can do a lot of the administrative tasks that we used to give interns and new hires.
[00:31:15] But let's make sure that we rewire those jobs so that we're building the expertise, the human expertise, that our companies need as we evolve our ways of working. And we're gonna have to do it faster. I think HR and L&D is well set up to lead that transformation forward. I'm talking to large organizations where CHROs and CLOs are deep in it,
[00:31:40] bringing this transformation forward. Sometimes our roles are very limited and our remits can be narrow, but we're having those conversations. So that's one thing: I would say double down on your entry-level hiring. Don't give up on people at that level, right?
[00:31:56] Because your company needs to be an incubator for the expertise that's unique to you. The other thing I'd throw out there, which might be a little provocative: let's think about whether companies can create subsidies. AI will replace some jobs traditionally held by humans, right?
[00:32:21] There will be profits gained from bringing in intelligence, automation, robotics, all these kinds of things. So can you create subsidies to offset this loss? If business pays AI to do high-value labor at scale, for example, which was once the job of human employees, can you pay surcharges or subsidies so that displaced workers, beyond typical unemployment, can have the skilling opportunities they need?
[00:32:53] Think about those 600,000 tech layoffs. What are those companies doing to help re-skill those individuals for other types of jobs, even beyond their own companies? So I think companies are gonna have to have this conversation with themselves as they automate, augment, and displace: what's their responsibility overall, right?
[00:33:13] So I'd say those two things are really important: double down on entry-level hiring, and figure out how you can create subsidies for skilling beyond just your own workforce.
[00:33:23] Tom Griffiths: Yeah. Yeah, it's a really great point about entry-level hiring, because I think COVID showed us, and is still showing us, the negative effects that can happen when you disrupt that early career pipeline
[00:33:36] and don't have the same kind of early onboarding and early career experience. 'Cause those folks are a few years into their roles now, and the management pipeline has been disrupted, 'cause people haven't necessarily built the skills and the relationships they would normally have built in that flow of labor.
[00:33:52] On the upside, I've also heard of companies, even Google, getting really excited about their most junior employees coming in native with these tools: very comfortable with new paradigms of work, having invested in themselves to figure it out, and able to bring new practices to the teams they're joining.
[00:34:13] So I think there are things we can learn from that talent as well. On this optimistic note, who are you seeing, in terms of real-world examples, embracing the challenge? We'd love to hear some examples.
[00:34:29] Markus Bernhardt: There are tons of use cases out there
[00:34:31] where people are embracing the challenge really well. I publish a few of those in my Endeavor report myself, so sorry for the shameless plug. We can see that there are great examples when we look at both parts of the wave, starting with the surface wave.
[00:34:48] There are some great examples of how upskilling is happening, how companies are providing the resources for the workers to get to know the tools and start to experiment. To Brandon's point, there are some fantastic examples out there where junior people, maybe in a marketing position or similar, came in, automated a piece of a workflow, and said to their boss, here, I've automated this.
[00:35:11] And there are even stories out there where the boss apparently responded, you've just gotten yourself out of that part of the work. So we're seeing great stories, and in this day and age it's about doing this in real time. People who are doing this well are not sending you to an e-learning.
[00:35:30] They do this in real time through workshops and in person. I always say the whole world hasn't gone digital; there's workshops and in-person stuff that can be done, and people need to practice, experience, and exchange thoughts to grow quicker than they would on their own.
[00:35:46] And so that's really key from a growth perspective. Teams are doing fantastic things in Slack and Teams, for example, to help nudge people in the right direction and give them upskilling opportunities: you're on the marketing team, why don't you try a prompt like this today and just play around with it for five minutes? And the same for other teams. So, fantastic use cases on the individual level. And then I've also seen some really cool use cases when it comes to automating workflows and getting the people who used to do the role to become the architects of the workflow.
[00:36:23] Tom Griffiths: Yep.
[00:36:23] Markus Bernhardt: So my favorite is an insurance company that looked at automating a small fraction of claims, those under 150 bucks. They didn't start off with, we're gonna automate all of it. They started off saying, let's take the easiest, tiniest fraction at the bottom, the claims that are really the simplest to sign off.
[00:36:43] Let's automate those. You embark on that journey, you start building the project, and you realize that to make those decisions, you need certain information in place and checked. It's not as straightforward as it sounds: when a person pulls up a file in a CRM, they've already made sure it's the right customer
[00:37:01] and that they've connected the claim to the right customer. That's something they did unconsciously, because they would've noticed if it suddenly wasn't the right name or it was a different ID.
[00:37:11] So there are technical things we need to start looking at when it comes to workflow systems architecture. Then you look at how those decisions can be made with the information that is available, or need to be flagged.
[00:37:22] And then you have a system in place that can automate a certain part of the process. We then tend to run this for two or three weeks in parallel with the people doing the job, and neither knows what the other decided. So the people do the job and actually interact with the client, and the AI does the job on the side, just as a reference value for later on.
[00:37:43] And after two to three weeks you compare notes, and organizations realize the humans were never as consistent as we thought they were. So we've uncovered inconsistencies in how the team did the job, and we see where the AI broke, sometimes in areas we didn't foresee at all, sometimes in very obvious ones. But after two to three weeks of running in parallel, you know what you need to do to improve the AI system.
[00:38:09] And you know what you need to review when it comes to the humans who've been doing the work. So after maybe six to twelve weeks, you're in a position where you can say, we wanted to automate that fraction, and we've now automated this fraction.
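The parallel "shadow run" Markus describes, logging AI and human decisions side by side and comparing them after a few weeks, can be sketched roughly like this. All names, sample data, and the $150 threshold handling here are illustrative assumptions, not the insurer's actual system:

```python
# Rough sketch of a shadow run: the AI decides in parallel with the
# human team, its decisions are logged but never shown to customers,
# and after a few weeks the two decision logs are compared.
from dataclasses import dataclass

@dataclass
class ClaimDecision:
    claim_id: str
    amount: float
    human_approved: bool
    ai_approved: bool

def compare_shadow_run(decisions, small_claim_limit=150.0):
    """Compare human and AI decisions on small claims only."""
    small = [d for d in decisions if d.amount <= small_claim_limit]
    disagreements = [d for d in small if d.human_approved != d.ai_approved]
    agreement = 1 - len(disagreements) / len(small) if small else 1.0
    return agreement, disagreements

# Hypothetical log: three small claims, one disagreement to review.
log = [
    ClaimDecision("C-001", 40.0, True, True),
    ClaimDecision("C-002", 120.0, True, False),  # AI broke here: review why
    ClaimDecision("C-003", 95.0, False, False),
]
agreement, to_review = compare_shadow_run(log)
print(f"agreement: {agreement:.0%}")
print([d.claim_id for d in to_review])  # claims to walk through with the team
```

The disagreement list is the interesting output: each entry is a case to discuss with the team, which surfaces both the AI's failures and the human inconsistencies Markus mentions.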
[00:38:22] Tom Griffiths: Yeah,
[00:38:23] Markus Bernhardt: but we've learned so much in the process about how to architect such a process, what to watch out for, how brittle it is.
[00:38:31] The moment the policy changes on what a small claim is, you don't just go to the AI and say, the policy has changed, here's a small update, update the whole workflow automatically, please. That is a disaster waiting to happen. Such a small change in policy has to be looked at much more precisely. When the insurer finds out there's a new way of scamming coming up,
[00:38:52] you don't just go to the AI and say, here's a new way of scamming we've discovered, implement it.
[00:38:57] Brandon Carson: Yeah.
[00:38:57] Markus Bernhardt: These automation pieces are far more brittle than we think. And so suddenly the people who were doing the job are now the architects of, and with, the system. The ROI in this case is fantastic: customers get the thing signed off much more quickly.
[00:39:13] Brandon Carson: Yeah.
[00:39:14] Markus Bernhardt: In those cases, sometimes within minutes. Is that an ROI you can calculate directly? Probably difficult, because customer satisfaction is a difficult ROI to put into numbers. But it is huge. So for me that is a really good use case showing an organization that has gone through the process in this area.
[00:39:35] They've laid no one off. They still need everyone on the team to help architect the workflow. They are now all upskilled, and they're looking at automating the next fraction while also putting patches and updates into the existing code. These people are now really versed at working with the tech team that writes the code and the modules in the background, but also with legal, who want to know exactly what gets flagged and how, and whether we've got runtime oversight
[00:40:05] implemented in a way that works. For most teams, especially insurance teams signing off claims, runtime oversight was not a term they were used to using; neither was data contracts, and neither was decision rights. But they are now versed in these terms, they can work with the tech team, and they're building things that are of real value to their clients.
[00:40:29] So we are seeing some fantastic use cases out there. And whether these use cases apply one-to-one to your organization or not, that's not the point. When you look at other successful use cases, I think most brains automatically go, oh, that's interesting; it's not similar, but we've got something,
[00:40:47] I just had an idea where we could do something. I think it's very valuable to look at these use cases and learn from them. And if anything, the AI is teaching us: network with your human friends, find out who's doing what, who's tried what, who's failed at what, and learn from and with one another.
[00:41:04] Which is why webinars like the one Hone is hosting here today are so useful when we all come together and share what's working and what's not.
[00:41:12] Brandon Carson: Absolutely. I do think a lot of this is gonna be organic and happen over time, especially in large enterprises. I worked at one retail company where we had 55,000 jobs in our job architecture.
[00:41:23] We're not gonna pause and go change or rewire all of those at one time. We will be identifying the workflows. I would argue, especially in the people function, the learning function: identify the five to ten enterprise-critical roles for the company, the ones where there's less obfuscation, less nuance, but that are really important.
[00:41:46] Then dissect what can be automated, what AI will impact, and how you augment the human workforce. Those serve as your template for how you're going to look at these workflows. For example, a lot of the front end in retail stores is being automated, so we're seeing a lot of customers checking themselves out and bagging their own groceries and products.
[00:42:14] But there still needs to be an employee there to assist and connect with the customers on a human level. Those are different skills than just scanning items through the checkout process: really being able to troubleshoot and support customers in that process. So those are nuanced capabilities and skills, and potentially new roles, or changes within an existing role.
[00:42:41] I think we're gonna see a lot more of those minor to medium-sized tweaks within an existing role as automation or intelligence augments some of what the human's doing. That's a crucial area. I call it an area of skills volatility, if you will, because you need to connect the workforce to purpose and meaning in all jobs.
[00:43:04] When there's augmentation or intelligence being brought into a job, how does it impact people, and does it make work better for them? I think having your philosophy as a company is really critical. I know Randstad a couple of years ago announced their philosophy: we will not use AI to replace humans inside our company;
[00:43:28] It will only be used to augment human capability. So what is your company's perspective, policy, and guidelines on AI? That really can't be an area of ambiguity. Before you start rewiring workflows and changing things, you really need to have that leadership-
[00:43:50] level understanding of what the primary goals are and what we're all solving for together as the organization as we move through this. Because the exciting thing is, you have an opportunity to stop designing humans to be better at work as it exists, and start making work better for humans, right?
[00:44:14] So we have this opportunity now. I don't think it's gonna be, pause and redo all these workflows; I think it's gonna happen more organically, especially in large companies. And you have to be willing to try things and get customer feedback. For example, with the automation of the front end, one company I was at saw
[00:44:29] major customer backlash, because customers do like there to be a human option for checking out, where they're
[00:44:35] having their groceries bagged by an employee and having a conversation with that employee instead of scanning their own items. So we can't predict what customers will expect. For example, we had curbside pickup, which originated during COVID. We all thought, at the companies I worked at, okay, this is a COVID thing and it'll go away once COVID
[00:44:55] recedes, once we can come back together in the store. No, customers loved it. They wanna pull up to the curb and have someone bring them their stuff. So that stayed. It's really important, and I say this to leaders at all levels: you really have to know how the work gets done. Before you can make decisions about your AI policy, you gotta understand those nuances of how the work gets done in your organization.
[00:45:17] Tom Griffiths: Great points. And one common thread: the iterative, incremental approach, starting with a small set of claims and then pushing the boundaries, or starting with a new version of a service and learning and adapting. Not biting off too much or being overwhelmed, but experimenting, learning, iterating.
[00:45:37] Really helpful examples. Thanks, guys. So we're into our final stretch here, the last few minutes, and we do wanna make sure that everyone who joined and gave us their time walks away with as many tactical tips and tricks as possible. So we're gonna fire up our final poll. Anu, if you can get this ready so we can hear from our audience.
[00:45:58] What's stopping you from greater AI readiness? And we can speak to a few of these barriers in a moment to hopefully help you through them. What's stopping you from greater AI readiness? Is it fear or overwhelm about transformation? Time constraints, guidance from leadership, or other approval structures or enterprise barriers?
[00:46:21] Give us your answer, and in a few seconds Anu will pop that on the screen.
[00:46:29] What have we got? Time constraints for experimentation. Very clear winner there, guys. What are your recommendations for that?
[00:46:38] Markus Bernhardt: That's the obvious issue of any change management we've ever seen in any organization.
[00:46:43] Tom Griffiths: Right.
[00:46:44] Markus Bernhardt: We're busy doing the job, we don't have time to do X. And replace X with anything.
[00:46:50] Tom Griffiths: Yeah.
[00:46:51] Markus Bernhardt: Upskilling, auditing, reviewing, writing performance reviews, writing three-year performance plans: all of those things that people have come up with. We don't need to reinvent the wheel here. All the basics of good change management still apply, and we need to help and guide people through this process.
[00:47:10] And if we don't make room for it, there won't be room. And I think the winners will show that making room has worked and those who refuse to make room might sadly be falling behind drastically.
[00:47:23] Brandon Carson: I remember I was at an airline and my team supported the frontline operation around the world, and we were told like, let's make better frontline leaders.
[00:47:34] We need some leadership development. There wasn't a whole lot available for the frontline leader. So we started working on putting together what we thought would be a really good program, and then the business was like, this is great, but you don't have time to take them out of the operation to put them in classes and that kind of stuff.
[00:47:53] So the business wants better leaders, but the business isn't willing to give the time to make better leaders, right? It's a conundrum, and we're all faced with this. To Markus's point, time to learn is probably the number one most critical challenge the organization faces, right?
[00:48:13] So our answer for the airline was, we built this 15-month, MBA-type program, and we made it hybrid, but we infused into the work itself the instruction and support to help leaders become better leaders as they do their work, right? It was a bit of a challenge, but the biggest unlock for us was to leverage AI coaching in the moment for our frontline leaders.
[00:48:45] And we saw instant results: better communication, better connection during one-on-ones, better thinking about how they show up, how they exhibit values, how they lead, all of that. So I would say that on time to learn, it's gonna be a really tough negotiation with the business to get more time for learning.
[00:49:10] So how can you get creative in infusing development into the actual work itself as people are doing it, and look for those faster opportunities, if you will, to build capability? I would say for leadership development, the biggest unlock is coaching. You've got a lot of options now for how you can
[00:49:31] scale coaching right now. A hundred percent.
[00:49:34] Tom Griffiths: Yeah.
[00:49:34] Brandon Carson: Just wanna No
[00:49:36] Tom Griffiths: Thanks, Brandon. And yeah, it brings it full circle. Certainly our foundational belief here at Hone is that the world is changing increasingly rapidly, but people's skills are still gonna drive the business, and leadership is the critical one there. The good news is that AI can help us develop those skills in a new way, in the flow of work, through AI coaching, AI role play, and AI upskilling experiences.
[00:50:01] So if folks are curious about that, we'd love to show you the latest of what we've got; we'll follow up after the webinar. But we're right about at time. Guys, thank you so much for a really amazing, deep, wide-ranging conversation. Tons of value there for the audience. Thank you for all of your insights.
[00:50:18] We appreciate it.
[00:50:19] Brandon Carson: Yeah, thanks. Great. Thank you for having me.
[00:50:21] Markus Bernhardt: Thanks for hosting this, Tom. Yeah, I've really enjoyed this.
[00:50:23] Tom Griffiths: Absolutely. Thank you everyone for joining. Have a great rest of your day.
[00:50:27] Brandon Carson: Bye-bye.