AI Hype vs. Reality: Finding the Middle Ground for L&D with Alex Khurgin

What's covered

Join Tom Griffiths and Alex Khurgin, learning experience designer, co-founder at PeopleHat, and AI fanatic, as they dive into the world of AI in L&D. They discuss AI trends, the value of experimentation, and practical tips for collaboration with AI in the workplace. Learn how to stay ahead with AI tools and implement AI effectively in your organization.

About the speakers

Alex Khurgin

L&D leader and co-founder at PeopleHat

Alex Khurgin is a strategic and creative learning leader with over 17 years of experience in the field. He owns a learning consultancy with clients that include Capgemini, IBM, Better, Braze, Casper, and dozens of others. He has also been among the first hires at three cutting-edge learning technology startups that each grew from a handful of people to hundreds of employees with multiple Best Place to Work awards. His work has been featured in many top L&D and training publications, including ATD, Training Industry, and Chief Learning Officer. His sessions are also routinely among the most-attended and highest-rated at conferences such as Techweek, DevLearn, ATD, and CLO Innovators. He uses humor, philosophy, and storytelling to engage and inspire audiences, while guiding them to practically apply what they are learning back in the real world.

Tom Griffiths

CEO and co-founder, Hone

Tom is the co-founder and CEO of Hone, a next-generation live learning platform for management and people skills. Prior to Hone, Tom was co-founder and Chief Product Officer of gaming unicorn FanDuel, where over a decade he helped create a multi-award-winning product and a thriving distributed team. He has had lifelong passions for education, technology, and business and is grateful for the opportunity to combine all three at Hone. Tom lives in San Diego with his wife and two young children.

Episode transcript

Tom Griffiths: This is Learning Works, a podcast presented by Hone. It's a series of in-depth conversations with L&D experts, HR leaders, and executives on how they've built game-changing learning and development strategies, unleashed business growth through talent development, and scaled their global L&D teams. Tune in for wisdom and actionable insights from the best in the industry.

I'm Tom Griffiths, CEO of Hone. Welcome to Learning Works.

Welcome, listeners. I'm happy to introduce you to my co-host today, learning experience designer Alex Khurgin, who's been in the field of L&D for over 15 years, been a pioneer of microlearning and live learning, and is now an AI obsessive.

Alex Khurgin: Thank you, Tom. I'm really happy to be here. So I, like many other people since ChatGPT came out, have been completely engrossed in AI.

I just can't talk about anything else. It's really annoying. And I've learned in the past few weeks, after getting some sharp looks from my wife, to stop randomly bringing up AI in, like, dinner conversations with guests who are just trying to enjoy their meal. But this is actually a perfect outlet, and you're the perfect host for this for me, because you have a really deep background in machine learning.

So you have an MA in computer science from Cambridge. You were a PhD student in machine learning at the University of Edinburgh, and you finished your MSc degree in machine learning there. So you obviously have a really deep background in this stuff. I know you haven't done it in a while. But yeah, to get us started, can you talk a little bit about what you were studying in your graduate program and your experience with AI and machine learning?

Tom Griffiths: Yeah, absolutely. I mean, since I was a little kid, I was really into computers, really into robots, and then AI. And so, yeah, I went to study CS at undergrad, 'cause that was the pathway into the field. And then did a master's and part of a PhD at Edinburgh, which was one of the leading places to study machine learning back in the mid-2000s.

So we're talking about, you know, 18 years ago or so, a very different time for the field. In fact, a time when the term AI was almost taboo, because AI had kind of come and gone as a fad a few times and ended up ultimately disappointing through the sixties and the eighties. And so people just preferred to call it machine learning and not use the AI moniker. That's obviously all changed.

Now, what we were working on then was a different branch of the field to the one that's led to all these breakthroughs recently; it was more in the kind of Bayesian probabilistic networks space. And then shortly after, I left the field, dropping out to start companies, and that's what led me to FanDuel and then to Hone.

So going way back, when I was doing the research, I was really enthralled by the idea of computer vision. We had very primitive models back in those days relative to what we have today. And, you know, the task might be: identify a digit from handwriting, or can you tell the difference between apples and bananas?

Whereas now we've got some incredible breakthroughs through integrating multimodal approaches and models. There are some great demos of that online, but I think there are some actually really interesting applications to the workplace and how we do L&D. So Alex, you know, you're up on the latest here. Can you take us through what's happening and how we might use it in the work that we do?

Alex Khurgin: To be honest, before ChatGPT came out, if you were describing your kind of journey with machine learning, I would have nodded very nicely and maybe my eyes would have glazed over, but now I'm genuinely really interested. And, uh, it's amazing how this kind of taboo subfield is all of a sudden dominating the thoughts of people who have really no background in it.

So the first topic we want to talk about is actually related to your background in computer vision: the release of GPT-4V, which is OpenAI's vision-based model that can see and hear and speak. It also browses the web again, after OpenAI suspended that feature for several months.

People are already using it to do things like avoid parking tickets: you can take a picture of some incredibly complicated set of parking signs and just ask, am I allowed to park right now? And GPT-4V will tell you yes or no. Some people are using it for, uh, probably better things for society, like transcribing prescriptions and providing a summary of medication dosage and so forth; doctors are known to have chicken-scratch handwriting for some reason.

Uh, now GPT-4V can help with that. And then getting into use cases that our listeners are probably more familiar with: things like analyzing a graph. So if you have a chart, if you have any kind of visual, you can analyze it, you can forecast things from it. Another kind of use case that seems universally usable is the ability to generate code from an image.

So you can take a picture of a whiteboarding session and ask GPT-4 to just code up the UI for that image in whatever language it is that you're using. And I started to think about how this might apply to L&D. So one weird idea I had that I wanted to try is to take a picture of a job aid

and use GPT-4 to create something similar, but for a completely different skill. So if I like what a particular cheat sheet out there in the wild looks like, and I think it's a good resource, can I take a picture of it and get something similar for something completely different?
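Concretely, the image-plus-question pattern Alex describes is just a chat request with an image attached. Here is a minimal sketch of the parking-sign example, assuming the OpenAI Python SDK; the model id and image URL are placeholders, so check the current docs before running it.

```python
# Minimal sketch: ask a vision-capable model a question about a photo.
# Assumes the OpenAI Python SDK (v1.x) and OPENAI_API_KEY in the environment;
# the model id and image URL below are placeholders.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # substitute whatever vision-capable model is current
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Here are the parking signs on my street. "
                         "Can I park here right now, at 2pm on a Tuesday?"},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/parking-signs.jpg"}},
            ],
        }
    ],
    max_tokens=300,
)
print(response.choices[0].message.content)
```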

So this is kind of a weird idea, but I think it's really important, because what it shows is the value of tinkering at this time. We're in this kind of in-between time right now; we find ourselves inhabiting this incredible but imperfect AI world where we have all these tools, but they're not deeply embedded in everyone's workflow.

And so I've been thinking a lot about the value of tinkering now. What's happening is you have all these AI tools that make a lot of mistakes: they hallucinate, they don't work exactly like you want them to work. And there's this unique opportunity for people who are willing to put in the work, because if it's hard or weird, that means that someone else is going to stop, but you don't have to.

So that means individual people can rapidly accelerate their craft just by being really committed to using these tools and being willing to work through all these kinks. And over time, this is going to go away, right? A lot of people talk about prompt engineering and how valuable it is right now, but you should probably anticipate that within a few years, the AI tools we're all going to be using are going to be customized for each of our roles.

They'll be embedded deep into our HR and project management systems, our communication systems, and we're going to need to provide much less context for AI tools to help us, because they're already going to know everything about us, about our co-workers, about the work that we're doing together. But for now, it seems like tinkering is really important, and you get efficiencies, little efficiencies here and there, and people are starting to come out ahead.

What do you think of that, Tom? Have you been tinkering yourself? And what effect do you think this kind of in-between time is going to have on the workplace over the next couple of years?

Tom Griffiths: Yeah, I mean, I think tinkering is a great way to characterize it right now, and every big technology wave has a tinkering stage, right?

If you think of the home computer revolution, there were tinkerers in garages making those early models. And in the early web, there were tinkerers kind of building websites and seeing what was possible. And then, you know, those projects developed into fully fledged, massive companies. The same thing is happening here.

And the sandbox is a nice visualization of that, where people are tinkering, and, you know, we'll see how it can help people in their day-to-day work, how it can enhance products, how it can be the basis for brand-new products. I think for me, as the CEO of an existing company where we teach many different businesses many different skills, it's a really interesting time, because no doubt there are lots of ways to harness the amazing breakthroughs that we've seen recently, but it doesn't feel like it's going to come from the top down.

Like, I've got a bunch of ideas, and so does the exec team, but what I'm actually pretty excited to do is to empower the team to tinker away. We need to create the sandbox so that it's safe to play in terms of proprietary data and personal data, but once we've created that, what we're allowing people to do is just hack together ideas and thoughts and show us what's possible in that tinkerer mindset.

So I think there's a way to bring tinkering into a company, and then, you know, may the best seeds of ideas win, and we can develop from there.

Alex Khurgin: Yeah. I feel like from an L&D perspective, it really helps if you give people license to do this. And so, you know, a lot of people might not be totally comfortable with these tools.

They might be really immersed in their work and not really thinking, like, okay, how can I take the tasks that I'm doing and potentially use AI to do them more efficiently or better? And it makes sense that from an L&D, HR, kind of people-leader perspective, you can come in and say, you know, what we really care about is your growth, and you are the closest to the tasks that you do.

And we're giving you the license to try to use these AI tools, assuming that you're doing it in a safe way: you're not exposing customer data and so forth, and that's a big, important part of it. And so working with IT to develop some ethical guidelines and just safety and security guidelines for the company.

But within those guidelines, helping people think about their capabilities as something that's malleable and something that can really be improved by playing around with AI. And, as you mentioned, taking a bottom-up approach.

Tom Griffiths: Exactly. And we hear all the time, like every day, about the L&D teams that we serve being extremely stretched when it comes to resources, and having so many team members to support, so many learning initiatives that they'd love to be doing.

They just don't have the bandwidth. And so, if there are ways that learning professionals can get leverage on their time, maybe it's even just 2x. We'd love 10x, but just 2x would allow them to serve, you know, twice as many people, or create twice as many programs, or get leverage from their time in other ways.

And so it's incredibly exciting, I think, to be experimenting in L&D on the front lines so that you can fulfill your purpose and your mission more effectively for more people.

Alex Khurgin: We can say that up until now, the kind of story of AI seems to be a lot of individual, dedicated people picking up little efficiencies in their work, and in the overall scheme, these can really add up.

So, you know, the top-down portion of it is giving people license to play around, and there are now real indications that this kind of approach creates significant improvements in people's work. Which brings us to the second topic I wanted to focus on today: there's now a lot of research showing that AI significantly boosts knowledge worker productivity.

Specifically, there was a study done by a multidisciplinary team from Harvard, MIT, and UPenn. They observed 758 consultants working at Boston Consulting Group, which is one of the three big management consulting firms, along with McKinsey and Bain. And 758 consultants is like 7 percent of their workforce.

So it's a pretty sizable chunk. And what they found in the study was that for 18 different tasks that were selected to be realistic samples of the kind of work done by those consultants, those who were using GPT-4 outperformed those who did not on every single dimension they studied. So I was curious about the tasks, and they were legit.

They were things like: okay, generate ideas for a new drink in markets that are underserved. Pick the best idea and explain why, so that your boss and other managers can understand your thinking. Describe a potential prototype drink in vivid detail. Come up with a list of steps needed to launch the product.

That was just one type of task. Another task was: generate ideas for a new shoe aimed at a specific market or sport that is underserved; use your best knowledge to segment the footwear industry market by users. So you had all of these varied kinds of tasks, which seem like legit consulting tasks.

And the big takeaway for me, seeing this kind of wide-scale improvement across all of these different tasks, is that, okay, this is confirmation that AI can, in fact, help people do their work better or more efficiently.

Tom Griffiths: Right. This is not the AI doing the tasks. This is them collaborating with the AI.

Alex Khurgin: Exactly. Yeah. This is these consultants. And I'm not sure if they actually looked at just the AI doing the task, but for these tasks, knowing what I know about GPT-4, I don't think it could outperform a human plus an AI. The headline finding for most people seemed to be that in the study, AI boosted low performers up well above average,

but helped top performers less. So a lot of people seem to be thinking that, okay, LLMs are just going to raise the baseline of work quality in general, and the people who are going to benefit most are those currently in the bottom half of skills or experience, which is incredible. That's this great egalitarian opportunity.

But I think the truth is more nuanced. Yes, across these 18 tasks, this is what happened. But in just my experience, having thought about different kinds of tasks, what I think is going on is that for roles or tasks that depend on standardized knowledge, for which it's relatively easy to achieve peak performance, LLMs aid the most inexperienced people.

So, like, for a customer support inquiry: for a lot of them, the perfect response is relatively easy to achieve. Someone has a problem that you've seen many times before, and you solve it for them. And in those instances, a support rep can use an LLM that's trained on the company policies to come up with that perfect solution, even if they're a novice support rep.

And so for this task, solving a routine customer concern, it levels the playing field. But I think there are lots of other tasks that depend more on tacit knowledge, where there's high difficulty in achieving peak performance and generally a lot of room to improve. And in those tasks, I think LLMs uniquely accelerate experts.

For example, continuing with the customer support idea: if you have a really complex customer support inquiry, where someone's really emotional, they're asking for something that company policy doesn't allow, they're maybe asking for something you've never heard of before, it's unlikely that this kind of scenario is going to go well if you have a novice that's just working with an LLM, because the novice is not going to be able to discern which of the recommendations from the LLM is best or makes sense.

They're likely to be overwhelmed by the emotional state of the customer, whereas an expert might be able to calmly look through some generated recommendations and make the right judgment call. What do you think about that split, and how relevant do you think this is for how we think about LLMs aiding tasks at work?

Tom Griffiths: Yeah, no, I think it's right. The way that the models work, of course, is that they're trained on many millions of examples of similar things, and they kind of build associations and probability relationships between tokens, and then those kind of roll up into higher levels of abstraction. And so they're best at things that are similar to what they've seen before.

And so, mapped to a work task, if there are lots of examples of successful customer service, then yes, they can perform well on that. I think the expert can get leverage if they know how to use the tool, though. They might not have a lot of patterns from inside the company for a particular novel HR scenario, but an expert could say, well, this is a bit like something else out in the world.

And so they can mine or probe the deeply trained model to get similar responses and use their human judgment to decide how those external things can apply to an internal task. So I think it gives both leverage, but you're right: I think for more novel tasks, you need a higher degree of skill and judgment to make them work for you.

Alex Khurgin: I think another kind of underappreciated aspect of this is just willingness to use the LLM. And this goes beyond expertise or the difficulty of a task. It's simply going to be mechanically true that if you're willing to use the LLM, and you're also willing to tinker and work through some of the challenges that come with using LLMs now, given some of their deficiencies, if you're willing to do that and put in the work, that's going to be the largest determinant of the value you get from AI tools. Because if you're not using them, and if you don't make a habit out of using them, then you're going to get limited value, no matter how much of an expert you are, no matter what kind of tasks you're working on.

Tom Griffiths: Yeah, and I think that's where it comes back to tinkering, because these tools are so new, it's not clear on the outside of the box, in some sense, what they can be used for. You need to experiment and play around a little bit to see the kinds of scenarios or use cases they could have for you and your particular kind of work.

I think one of the reasons I've personally pulled back from, you know, using them for everything I can possibly think of is the hallucination or inaccuracy concern. And so I'm curious if you've thought about, or seen advice on, how to quality-check what you're getting out.

Alex Khurgin: Yeah, there are different strategies.

One is just to think about: what are the tasks I have to do where it doesn't matter whether something is true or not? I have specific examples of that in L&D, where if I'm trying to come up with a case study for a class, it's perfect. I don't care about accuracy, because I'm trying to come up with something fictional.

And so being able to tell an LLM: come up with an example in which an executive who's really well-meaning is trying to drive their team to achieve a goal, but they're doing it in a way that isn't agile; it's more of a command-and-control example. And so I can use an LLM to very quickly generate some baseline example and then tinker with it from there.

So thinking about things where fact doesn't matter is really important. Another thing is to kind of index on how much expertise you have in something. So I find that when I'm using LLMs in fields where I do have a lot of tacit knowledge already, I'm able to discern, like, this is a good answer, this is not a good answer.

And this speaks to why, for some tasks, if I have that expertise, the hallucination doesn't matter as much to me, because it's easier for me to spot. But absolutely, you know, there's a famous example at this point already where a lawyer cited all these cases that didn't exist and was admonished by the judge, and people cite research studies that aren't real, that mention real researchers but not real research papers that they ever wrote or real findings.

So you have to be really careful about that, because LLMs can be really sycophantic; they're just trying to give you what you want. Like, I'm looking for research that says this. And they're like, well, yes, sir, I'm going to give that to you, even if it doesn't exist. So one of the things I do, actually, is in my custom instructions for ChatGPT, which apply to every single prompt that you create,

I write: don't be a sycophant. Don't just agree with me for the sake of agreement. Don't just give me what I'm looking for if it's not true. And indeed, there's research showing that when you tell an LLM how to behave and how to think and how to answer things, you're going to get different kinds of answers.
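If you're working through the API rather than ChatGPT's custom instructions, the same idea is just a standing system message. A rough sketch with the OpenAI Python SDK follows; the instruction wording is illustrative, not Alex's exact text.

```python
# Rough equivalent of anti-sycophancy custom instructions via the API:
# a system message prepended to every request. Assumes the OpenAI Python
# SDK (v1.x); the instruction wording below is illustrative.
from openai import OpenAI

client = OpenAI()

SYSTEM_INSTRUCTIONS = (
    "Don't be a sycophant. Don't agree with me just for the sake of "
    "agreement. If the research or fact I ask for doesn't exist, say so "
    "plainly instead of inventing it, and say when you are unsure."
)

response = client.chat.completions.create(
    model="gpt-4",  # placeholder model id
    messages=[
        {"role": "system", "content": SYSTEM_INSTRUCTIONS},
        {"role": "user", "content": "Find research showing that microlearning "
                                    "always beats long-form courses."},
    ],
)
print(response.choices[0].message.content)
```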

So those are just some ideas for avoiding hallucinations. But yeah, you're going to have to do quality checks, and there may, in the future, be people in the organization whose responsibilities are more about checking the accuracy of things. Like we see now at The New Yorker: they don't just let anything go through.

They have a rigorous fact-checking process. And maybe we'll have to do that in other kinds of organizations, for other kinds of work, in ways we never imagined.

Tom Griffiths: Yeah. Another approach that can help is citing sources. And that's not necessarily as straightforward as it sounds, because a lot of the models are aggregating lots of different sources.

But what's the latest on how well ChatGPT or others can cite their sources?

Alex Khurgin: So Bard, I think, does a pretty good job of linking to direct Google results. So it'll give you an AI answer. I've also found Bard's answers seem to be directly plagiarized from some specific article that you can then link to, which is a whole other thing. Whereas I feel like in a lot of cases, ChatGPT is generating text that, if you were to Google that exact text, you wouldn't actually find.

And I know that ChatGPT now has the ability to browse the internet, so I feel like that means the citations are probably going to be more accurate. But I've just come down to this: anytime I'm looking for specific research, I don't just ask what the research is and have it summarize; I ask it for specific citations.

And then I go and find those actual papers, and I've found the hallucinations there to be greatly reduced. One other way to think about hallucinations, or to think about avoiding them, is using a model that's grounded in your own data. I think if you use something like Azure Cognitive Search and have it ingest all of your company's data and anonymize it, then you can basically get zero hallucinations, where you have a closed set of data and every answer that anybody gets is based on official data that has been created by your organization.

And more and more, it's going to be the case that we'll see companies having their own instances of GPT that their employees can use, where they don't have to worry about factuality, basically.
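The pattern Alex is describing is usually called retrieval-augmented generation: search a closed set of company documents, then have the model answer only from what was retrieved. Here is a minimal sketch of that shape, where `search_company_docs` is a stand-in for whatever index you actually use (Azure Cognitive Search, a vector store, and so on); in practice this greatly reduces hallucinations rather than eliminating them outright.

```python
# Minimal retrieval-augmented generation sketch: answer only from passages
# retrieved from a closed company document set. `search_company_docs` is a
# placeholder for a real index (e.g., Azure Cognitive Search, a vector store).
from openai import OpenAI

client = OpenAI()

def search_company_docs(query: str, k: int = 3) -> list[str]:
    """Placeholder: query your document index and return the top-k passages."""
    raise NotImplementedError("wire this to your search index")

def grounded_answer(question: str) -> str:
    passages = search_company_docs(question)
    context = "\n\n".join(passages)
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model id
        messages=[
            {"role": "system", "content": (
                "Answer using ONLY the context below. If the context does not "
                "contain the answer, say you don't know.\n\nContext:\n" + context
            )},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```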

Tom Griffiths: Yeah, I think that whole model is the future, certainly for existing companies, where the model isn't a competitive advantage, because anybody can get the model.

There are open-source versions of these models. In many ways, it's the data that is proprietary to you that can give you a competitive advantage. And so if you're training your model on your proprietary data and creating value for customers out of that, then that's what gives you the competitive advantage. Which, you know, is why in many ways, many of the breakthroughs in this recent wave benefit the incumbents, like a Google or Microsoft, who have the data sets to really have a proprietary advantage on the training side.

Certainly at Hone, we're thinking about all these amazing classes that we teach. We've got many of them transcribed. And so we can use AI to analyze what makes a great class, or what might someone want to go deeper on, or what are some of the trends or themes, across companies or within companies, that are coming up in training that people need help with.

And so it's exciting to think about, as a company: what is your proprietary data set, and how can we get leverage out of that?

Alex Khurgin: Yeah, the way you talk about it, it seems like there are these different fronts of AI advantage. There's the individual tinkering front, which is really valuable right now.

There's the kind of L&D front, which is being aware of all the AI capabilities and helping people turn those into habits and get those things into people's workflows. And then there's the kind of organizational strategy front, around what is the data that we have that's going to enable us to have a durable competitive advantage,

where we can train our models on this data that no one else has access to. And it's interesting that you mentioned Google. I'll try to say this without revealing almost any information. So a member of my family or a friend, I won't specify, has been offered many millions of dollars by Google for his company's proprietary medical data.

So these juggernauts don't just have their own data that they're working with. They're looking to buy more and more data, because they realize that this is ultimately the most valuable front. And after the kind of tinkering window goes away after a few years, data is going to be truly the most important source of competitive advantage.

Tom Griffiths: Yeah, agreed. 

Alex Khurgin: So here was another interesting finding from the paper. This paper on the Boston Consulting Group also talked about the so-called jagged frontier of AI, which is meant as a metaphor for the wildly different capabilities that AI has, right? So you mentioned individual people don't know in advance what AI can do.

In fact, the companies that train these models don't even know. As far as I understand, with GPT-3, OpenAI did not expect GPT-3 to be able to code at all. It was just this crazy emergent behavior that they didn't expect, which occurred because the model was trained on all of this code as text. And so there really is this kind of wild playground, or jagged frontier, of capabilities, and there are different ways of kind of dancing along that jagged frontier. And I was surprised to see this kind of poetic language in academic research.

The researchers talk about two different ways of collaborating with AI, or two different kinds of AI workers. So there are centaurs, who break down tasks into subtasks: they assign some of the tasks to AI and they do the rest themselves. So there's this clear demarcation, in the way a centaur is half man, half horse, and you can see where the human ends and where the horse begins.

There's this clear demarcation between the work. And it was true that some people doing those tasks, like trying to come up with a new kind of shoe for an underserved market and so forth, did those tasks in those ways. Another model is the cyborg model, where people go back and forth between their work and AI: they use AI for every task, and whatever they get out of the AI, they question its output, they tweak the prompts, and then the end result is this set of tasks they did where it's kind of unclear how much of it was human and how much of it was AI.

It's just this big jumbled mess. And both of these styles of collaborating with AI can be highly productive. And I really have no sense of when someone might be more of a centaur and when someone might be more of a cyborg, or which one ultimately makes more sense for what kind of person or what kind of task.

But there really is this distinction, and I think it's going to be an interesting thing to observe, again, as people tinker: what the process is for them, how it kind of shakes out, and what the science becomes around, you know, the best ways to collaborate with AI tools. What is your initial gloss on this, just from seeing these two different categories?

Tom Griffiths: Yeah, I mean, it sounds like there's a spectrum, right? It's not bifurcated as cleanly as this. Some people may be the extreme centaur who has very demarcated tasks for the AI, and then others could be a mishmash of, like, revisions, and then inputting a bit more, and maybe they do a revision of the passage of text that the AI does and put it back in.

At what point do you flip from being a centaur to a cyborg? I don't know. I think it's just a useful spectrum to think about. I imagine that as people get more comfortable with the capabilities of the AI and the tools, most people will trend towards cyborgs, just where there's not a clear distinction between their work and the AI's work.

That feels like the most efficient, fluid, productive way to do it. Yeah, sure, there'll be some tasks that you can chunk off, like make an image. I'm not going to be making some of the image; it'll be the AI making all of the image. But it just feels so interactive, which is one of the beauties of this modality, that I think most people will end up there.

So, Alex, I mean, thanks for a great conversation on a bunch of ways that L&D, or workers in general, can leverage AI. What would you recommend for L&D or HR leaders now to stay ahead with AI and find new ways to get leverage in their roles, using these amazing tools that are coming out every week, it seems?

Alex Khurgin: So I have one specific recommendation and one general recommendation. The specific recommendation is to try out this incredibly fun quiz. It was very fun for me, and if you're interested in AI capabilities, it will be fun for you, but that's the filter there. It's a quiz that helps you calibrate your knowledge around what GPT-4 is actually capable of.

So the way it works is it asks you a question: can GPT-4 solve this problem, or will it give you the right answer? And then you indicate your confidence level, like, I'm 95 percent sure that GPT-4 can solve this problem or do this task. And then you see whether you were correct, and you can compare your calibration to other people's calibration.

I found this to be a really illuminating exercise to understand, like, oh wow, there are these capabilities I had no idea about, and I thought it was actually really good at this, but it's not. So just doing this kind of quiz is a great way to calibrate your own knowledge. So that's the one specific recommendation.
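As an aside on what "calibration" means here: one standard way to score this kind of quiz is the Brier score, the mean squared gap between your stated confidence and what actually happened. A tiny sketch in Python, with made-up answers:

```python
# One standard way to score the calibration quiz Alex describes: the Brier
# score over (confidence, was_correct) pairs. Lower is better; always
# guessing 50% scores exactly 0.25. The answer data below is made up.
answers = [
    (0.95, True),   # "95% sure GPT-4 can do this," and it could
    (0.80, False),  # confidently wrong: this hurts the score the most
    (0.60, True),
    (0.30, False),  # predicted it mostly couldn't, and it couldn't
]

brier = sum((conf - (1.0 if correct else 0.0)) ** 2
            for conf, correct in answers) / len(answers)
print(f"Brier score: {brier:.3f}")
```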

The more general recommendation is to break down your tasks as an L&D person, let's say. So if you're an L&D leader, you're probably responsible in some way for building your learning team and setting goals for the team, analyzing needs, rolling out programs, measuring learning, all of these different things.

And there are people out there in your role who have been trying to do these things using AI, and they love to talk about it. They've been talking about it on LinkedIn, or they're writing a blog post, or maybe they have a tool that they created; they're a vendor, and their tool actually lets you do these things.

So just go out there and search for what people are doing with those specific tasks, and, using your knowledge of GPT-4's capabilities, go out there and tinker and try things out, and really methodically think about, for each task: how can I do this better using an AI? And you can go a level further than that.

After you do it for yourself, you can help individual teams do that. So helping each team leader, or every individual, list out the tasks that they work on and think about: how can I be a cyborg or a centaur with this task, to do it more efficiently?

Tom Griffiths: That's great. You have a deep background in L&D and have gone deep on AI, but you don't have a technical background in AI or machine learning.

So what resources have you leaned on the most to learn about the field? 

Alex Khurgin: Yeah. So there are a few in particular that are probably the ones I lean on the most. There's something called the 80,000 Hours podcast, which is these long, in-depth interviews with people who are building all of these new and incredible AI tools, and also people who are trying to keep those tools safe.

And sometimes those people don't overlap, and it's kind of interesting when they don't. But that podcast is great because it's talking to people at OpenAI, at DeepMind, and so forth, who are actually building these tools. There's also a guy named Zvi Mowshowitz, who is a former Magic: The Gathering champion.

But I know him because he has a Substack where he does this incredibly in-depth weekly AI roundup called Don't Worry About the Vase. And then there's Ethan Mollick, who I think a former guest talked about as well. I kind of resisted him for a little while, because I'm generally suspicious of anyone who bills themselves as a thought leader, you know, as a LinkedIn influencer.

But he really knows his stuff. He's a professor at UPenn, and he also has a Substack where he goes in depth into all of these topics. And those probably go from most to least technical, so the most technical stuff comes from 80,000 Hours. And if you want to get really technical, the website that has changed my life the most in the last year, I think, is called LessWrong.com.

And it's where, like, the smartest people in the world argue with each other in purely rational terms about the future of AI. So it's this really vibrant community of incredibly intelligent people. And yeah, I have this deep L&D background; I've been doing learning technology for 17 years.

And for most of that time, I've been focused on workplace learning, so I apply that lens to anything I'm learning. And I come away, I think, with a unique perspective.

Tom Griffiths: That's great. Those sound like some fantastic resources. But I also know you're diving in and playing and tinkering with the tools as well.

So, you know, what are some of the playgrounds that you've been using the most? 

Alex Khurgin: Yeah, I'm kind of basic in the sense that I like my brands. I like Jordan Brand, and I'll eat a particular cheesesteak or something. I've tried out different stuff, but I think GPT is the best. GPT-4 is just the most powerful model out there, and that appeals to me.

I'm happy to pay, you know, 20 bucks a month to get access to the smartest AI out there. It's weird to me, if I'm going to be using AI, not to use the most powerful one, and I think my perspective is probably similar to a lot of people's. And I think it tells you where we're heading with things: we're always going to be trying to get more and more capabilities, and whatever is the most capable thing is going to be most attractive to people.

And so there's a lot of incentive to build a more capable model than GPT-4. But yeah, GPT-4 is great. I've tried out Bard and Claude and a bunch of others, and for other kinds of capability reasons too, like the voice mode that came out recently and GPT-4 Vision, they're really ahead of the curve for the time being.

Tom Griffiths: Fantastic. It seems like overall AI is going to be a big boost to productivity, but have you seen, or are you thinking about, any instances where AI might hurt productivity?

Alex Khurgin: There are examples I've seen from radiology studies, let's say, where they found that, in general, radiologists performed better using AI than those who didn't, and also performed better than AI alone.

And this is true for many kinds of tasks, but there are instances where using AI leads you to perform worse, because you mistrust the AI. If you don't have inherent trust in the tool that you're using, you subject whatever recommendation that tool is making to a kind of scrutiny that ends up slowing you down and worsening performance.

And so if you're not calibrated to how good the tool actually is, it just kind of interrupts whatever process you might be running. So trust is a really important part of it, especially if you're a learning leader, if people don't trust that AI can help them, or if they've tried to tinker and came up short.

So Tom, are you familiar with the meme of the miner who's, like, mining for a diamond, and he gives up with a few inches left, right where all the diamonds are? You know what I'm talking about?

Tom Griffiths: I haven't seen that one.

Alex Khurgin: Well, I'm sure our audience is more online than you are.

You know, in general, that might not be the best lesson for people, because the sunk cost fallacy is real, and oftentimes it really is just better to cut your losses. But using AI is not this one-time thing you're trying, where if it doesn't work out, then you might as well cut your losses.

It's something that people are going to be doing continuously for a while, and it does make sense to figure it out. In the meantime, you may waste some effort, but it's worth it. And so if you're an L&D leader, helping people get that trust is going to be really important, and helping them kind of work through any challenges or roadblocks that they're facing, maybe by doing some kind of training or just helping people learn from each other.

Tom Griffiths: Yeah, and as you're thinking about people getting familiar with where they can and can't trust the tool, what's a good mental model?

I've heard people compare it to when you've got a new direct report on your team, or a new team member, and you need to spend time with them to see where the strengths are and kind of where the edges or the issues are. Is that the way to think about it?

Alex Khurgin: Yeah, I do think so. I think there are a lot of practical benefits to viewing an LLM as an employee or as a teammate,

in the sense that you actually get better performance by using prompts like: let's take a deep breath, and let's work through this step by step. So in one sense, for some tasks, the LLM is kind of like your intern or your employee; for other tasks, it's more like your teammate that you're basically even with and kind of collaborating with; and then for other tasks, it's like the super genius that you're asking a question of, and you're graced with its presence.

But in all of those instances, it helps to be polite and professional, actually, and it also, I think, puts you in the mindset of coming up with better prompts. Like, if you're thinking about the tool as this robot, where you're like, I just want to speak to an agent, and you're cursing at it and stuff, you're probably not thinking in the clearest terms.

You're probably not coming up with good follow-ups. And so it's a kind of magic thing that happens when you do kind of view them more as a person. And maybe that's going to be a problem down the line that we'll almost certainly have to deal with. But in the meantime, doing that little magic thing, I think, has a lot of practical benefits.

The tool just works better. 
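That "teammate" framing shows up directly in how prompts get written. A tiny sketch of the pattern Alex mentions, again assuming the OpenAI Python SDK; the support scenario is invented for illustration.

```python
# The "let's work through this step by step" pattern in prompt form.
# Assumes the OpenAI Python SDK (v1.x); the scenario below is an
# invented example, not a real company policy.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Let's take a deep breath and work through this step by step. "
    "A customer is asking for a refund outside our 30-day policy window "
    "and is clearly upset. List the options we have, the trade-offs of "
    "each, and then recommend one."
)

response = client.chat.completions.create(
    model="gpt-4",  # placeholder model id
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```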

Tom Griffiths: That's a great tip. Curious if you think there are other things that people might be getting wrong about AI. It's in the media a lot; there's a lot of hearsay and stories and conjecture. What do you think people are getting wrong?

Alex Khurgin: So I've seen some friends actually post their intentions of becoming a prompt engineer, saying that they wanted, like, a prompt engineering job.

And I had these assumptions too when I was first learning about AI. But I think what's already kind of happening, if you look at AI implementations in specific tools that people use a lot, like Salesforce, is that you're largely interacting with an AI by clicking recommendations. There is some prompt engineering that goes into it, but a lot of it is already kind of baked into the product and the workflow.

And it makes sense that tools that have a lot of people on staff trying to make them as usable as possible are probably going to give you recommendations, rather than this kind of blank interface that lets you do prompt engineering to get way more use out of a platform than other people do.

Products want all of their customers to do well. So I think, in that sense, prompt engineering doesn't really make sense as a future state. And then further, insofar as prompt engineering is important, I think it does kind of get cannibalized by a few people in the organization. So Salesforce Einstein, for example, has these templates that you can create.

So you can get your savviest salesperson to go in and put together some templates that everybody else uses. And so this general scale of prompt engineering doesn't really make sense to me as something to index toward. That said, prompt engineering still matters now, but I think the future is

just thinking really clearly about what it is that you want, and maybe not using that as input so much as using it as a way to evaluate the output. So an AI is going to do something that you then look at with your judgment. And if you're clear in your judgment, in your values, in your standards, then you can look at that output and identify if something needs to be changed or edited or whatever it is.

And I think that's the more valuable skill, and where people should be focusing.

Tom Griffiths: Yeah, I agree with that. Prompt engineering seems more like a skill that you bolt on to your existing responsibilities rather than a pure-play dedicated role. Like, if I'm a salesperson, as you said, or I'm a law clerk, I need to learn how best to use the tool and interpret the output, but that's just an extension of my existing role.

So it's like a skill-based thing. So to bring it back to L&D: how do you think an L&D leader should go about experimenting with or implementing AI in their workplace now?

Alex Khurgin: I think there are a few pathways for experimentation, and I don't know which one makes sense to start with, but I think all of them are ways of thinking about it.

So one is a task-centric approach, where you help people identify: what are the tasks that you do? What percent of your time do you spend on each of those tasks? And how much time could you save? A lot of people might not know the answer to that, but there are ways of answering that question. So you can literally Google the task plus AI.

So, you know, training needs analysis is one of your tasks; if you're an L&D person, it probably is to some extent. You can look up "AI training needs analysis," and you'll get an understanding of how much of this task can actually be accomplished with an AI. And then you can get each individual to do that, or maybe, if you're an L&D leader, you're just talking to department heads initially.

You're understanding: what are the opportunities where we can deploy maybe a single AI tool to meet as many of these opportunities as possible, because we know how much time we could potentially save? And then, as part of that exercise, helping people think about where else they could devote their extra time and energy if they were saving time doing these things.

So that's the task-centric way of thinking about it. Another way of thinking about it is capability-centric: just being aware of the capabilities of the large models. So there's vision and voice and hearing and cross-modality, meaning you can turn any one thing into any other thing. I've been on the content side of L&D for a long time.

You know, my whole career, there's always been a question around: what should this be? Should this be a video or audio or a blog post or a white paper? Well, you can now do all of those things. You can turn a book into a game. You can turn a law contract into a decision tree or something. And you can turn a blog post into a podcast into this other thing.

And so, for each of these capabilities, you can try to think about vision or voice or whatever and ask: what are the existing tasks that we do that this applies to? Or you can think about: what is the new, valuable stuff that we couldn't ever have done before that we can now do with these capabilities?

So one of those things is, for example: Hone is a learning company, and it would be really interesting if we could just translate all content into any language out there. There's a third pathway too, which is people-centric. So you could, for example, ask around or send a survey out and see who's using AI in your organization.

Which teams do they sit on? And let's talk to them and ask: what are they learning, and what would they like to share with others? You can also identify people who are skeptical and see the risks. So you might want to talk to them and make sure that what you're doing with AI is secure and wise. And then there are also people who are not just using AI to do their jobs better, but who really understand the technology, and they can help develop a strategy for the future of your business and how your organization gets work done.

Being AI savvy is this completely new capability that has come online, and it's not necessarily the people you would expect who have it, and it's not only the people who are currently making the biggest decisions. So there are people in your organization who have this new superpower all of a sudden, who can really help you strategically, both in terms of the business overall and in terms of processes internally.

Tom Griffiths: Yeah, it makes sense. And it can create this beautiful connectivity between folks in different departments, connected through this shared interest or expertise, and you can find really interesting applications within the company by getting those different disciplines together. So it's a great point.

Alex Khurgin: Are you ready for the twist, Tom?

So the twist is, I think there's a fourth pathway, which is looking at the tools that you're using, like Slack, Lattice, Salesforce, Notion, Asana, Google Docs, and all of their equivalents, meaning all of the tools where work gets done in an enterprise. They all have AI built into them. And then all the big AI labs have enterprise offerings as well.

And if you've already bought tools, or built them, you can start by looking at those: what are their AI capabilities that may have just come online in the last couple of months, and are people actually taking advantage of them? Because you've already paid for these tools, and a lot of them just have the AI functionality built in now.

You don't have to pay anything else, and people are just not using those things. So there may be a meeting transcription tool that you're not using. Or there may be a tool like, for example, Descript, which we use to edit podcasts and to edit audio in general, and there are all these amazing AI tools in it that, for some tasks, cut the task in half or more, or let you do things you couldn't possibly have done.

And so just looking at the tools you have already paid for is probably a good place to start as well.

Tom Griffiths: Right on. That's great. Super helpful. Thanks, Alex. And I just want to say, you know, a deep thank you for all the research you've done and the insights and depth of knowledge you've shared with our listeners here today.

Hope we can have you back sometime to do another episode on AI and L&D.

Alex Khurgin: I would love to be back, Tom. Thank you so much. 

Tom Griffiths: Thanks for listening to Learning Works. If you've enjoyed today's conversation, we encourage you to subscribe to the podcast for our exciting lineup of future episodes. Learning Works is presented by Hone.

Hone helps busy L&D leaders easily scale power-skills training through tech-powered live cohort learning experiences that drive real ROI and lasting behavior change. If you want even more resources, you can head to our website, honehq.com. That's h-o-n-e-h-q dot com, for upcoming workshops, articles, and to learn more about Hone.

Tom Griffiths: This is Learning Works, a podcast presented by Hone. It's a series of in depth conversations with L& D experts, HR leaders, and executives on how they've built game changing learning and development strategies, unleashed business growth through talent development, and scaled their global L& D teams. Tune in for the wisdom and actionable insights, the best in the industry.

I'm Tom Griffiths, CEO of Hone. Welcome to Learning Works.

Welcome listeners. I'm happy to introduce you to my co host today, learning experience designer, Alex Kurgin, who's been in the field of L& D for over 15 years, been a pioneer of micro learning and live learning, and is now AI obsessive. 

Alex Khurgin: Thank you, Tom. I'm really happy to be here. So I, like many other people since ChatGPT came out, have been completely engrossed in AI.

I just can't talk about anything else. It's really annoying. And I've learned in the past few weeks after getting some sharp looks from my wife to stop randomly bringing up AI in like dinner conversations with guests who are just trying to enjoy their meal. But this is actually a perfect outlet and you're the perfect host for this for me, because You have a really deep background in machine learning.

So you have an MA in computer science from Cambridge. You're a PhD student in machine learning at the University of Edinburgh. And you've finished your MSc degree in machine learning. So you obviously have a really deep background in this stuff. I know you haven't done in a while. But yeah, can you just, to get us started, can you talk a little bit about what you were studying in your graduate program and your experience with AI and machine learning?

Tom Griffiths: Yeah, absolutely. I mean, since I was a little kid, I was really into computers, really into robots and then AI. And so, yeah. Yeah. One. To study CS at undergrad, cause that was the pathway into the field. And then did a master's and a part of a PhD at Edinburgh, which was one of the leading places to study machine learning back in the mid 2000s.

So we're talking about, you know, 18 years ago or so at a very different time for the field. In fact, a time where the term AI was almost taboo. Because AI had kind of come and gone as a fad a few times and ended up ultimately disappointing through the sixties and the eighties. And so people just prefer to call it machine learning and not use the AI monicker, that's obviously all changed.

Now, what we were working on then was a different branch of the field to the one that's led to all these breakthroughs recently, but it was more in the kind of Bayesian probabilistic networks space, which is a different branch. It turns out that shortly after I left the field, dropping out to start companies, and that's what led me to FanDuel and then to Hone.

So going way back when I was doing the research, I was really enthralled by the idea of computer vision. We had very primitive models back in those days relative to what we have today. And, you know, the task might be identify a digit from handwriting, or can you tell a difference between apples and bananas?

Whereas now we've got some incredible breakthroughs through integrating multimodal approaches and models. And there's some great demos online of that, but I think there's some actually really interesting applications to the workplace and how we do L& D. So Alex, you know, you're up on the latest here. Can you take us through what's happening and how we might use that in the work that we do?

Alex Khurgin: To be honest, before ChatGPT came out, if you were describing your kind of journey with machine learning, I would have nodded very nicely and maybe my eyes would have glazed over, but now I'm genuinely really interested. And, uh, it's amazing how this, Kind of taboo subfield is all of a sudden dominating people's thoughts who have really no background in this.

So the first one is actually related to your background in computer vision. This is the first topic that we want to talk about. So the release of GPT 4V, which is OpenAI's vision based model that can see and hear and speak. It also browses the web again after OpenAI suspended that feature for several months.

People are already using it to do things like avoid parking tickets, so you can take a picture of some incredibly complicated set of parking signs and just ask, am I allowed to park right now? And GPT 4V will tell you yes or no. Some people are using it for Uh, probably better things for society. So transcribing prescriptions, providing summary of medication dose, dosage and so forth, doctors are known to have chicken scratch handwriting for some reasoning.

Uh, now GPT 4 V can help with that. And then getting into use cases that our listeners are probably more familiar with. So things like analyzing a graph. So if you have a chart, if you have any kind of visual, you could analyze it, you could forecast things from it. One, another kind of use case that seems like universally usable is the ability to generate code from an image.

So you can take a picture of a whiteboarding session and ask GPT 4 to just code up the, the UI for that, for that image in whatever language it is that, that you're using. And I started to think about how this might apply to L& D. So one weird idea I had that I wanted to try is to take a picture of a job aid.

And use GPT 4 to create something similar, but for a completely different skill. So if I like what a particular cheat sheet looks like or something out there in the wild, I think this is a good resource. Can I take a picture of it and see if I can get something similar for something completely different?

So this is kind of a weird idea, but I think it's really important because what it shows is. The value of tinkering at this time, because we're in this kind of in between time right now, we all find ourselves, we're inhabiting this incredible, but imperfect AI world where we have all these tools, but they're not deeply embedded in everyone's workflow.

And so I've been thinking a lot about the value of tinkering now, and. What's happening is you have all these AI tools that make a lot of mistakes, they hallucinate, they don't work exactly like you want them to work. And there's this unique opportunity for people who are willing to put in the work, because if it's hard or weird, that means that someone else is going to stop, but you don't have to.

So that means individual people can rapidly accelerate their craft, just by being really committed to using these tools and being willing to. Work through all these kinks and over time, this is going to go away, right? A lot of people talk about prompt engineering, how valuable it is right now, but you probably should anticipate within a few years that the I tools we're all going to be using are going to be customized for each of our roles.

They'll be embedded deep into HR project management systems, our communication systems, and we're going to need to provide much less context for AI tools to. Help us, because they're already going to know everything about us, about our co workers, about the work that we're doing together. But for now, it seems like tinkering is really important, and you get efficiencies, little efficiencies here and there, and people are starting to come out ahead.

What do you think of that, Tom? Have you been tinkering yourself? And what effect do you think this in-between time is going to have on the workplace over the next couple of years?

Tom Griffiths: Yeah, I mean, I think tinkering is a great way to characterize it right now, and every big technology wave has a tinkering stage, right?

If you think of the home computer revolution, there were tinkerers in garages making those early models. And in the early web, there were tinkerers building websites and seeing what was possible. And then, you know, those projects developed into fully fledged, massive companies. The same thing is happening here.

And the sandbox is a nice visualization of that: people are tinkering, and we'll see how it can help people in their day-to-day work, how it can enhance products, how it can be the basis for brand-new products. For me, as the CEO of an existing company where we teach many different businesses many different skills, it's a really interesting time, because no doubt there are lots of ways to harness the amazing breakthroughs that we've seen recently, but it doesn't feel like it's going to come from the top down.

Like, I've got a bunch of ideas, and so does the exec team, but what I'm actually pretty excited to do is to empower the team to tinker away. We need to create the sandbox so that it is safe to play in terms of proprietary data and personal data, but once we've created that, what we're allowing people to do is just hack together ideas and thoughts and show us what's possible in that tinkerer mindset.

So I think there's a way to bring tinkering into a company, and then, you know, may the best seeds of ideas win, and we can develop from there.

Alex Khurgin: Yeah. I feel like from an L&D perspective, it really helps if you give people license to do this. You know, a lot of people might not be totally comfortable with these tools.

They might be really immersed in their work and not really thinking, okay, how can I take the tasks that I'm doing and potentially use AI to do them more efficiently or better? And it makes sense that from an L&D, HR, kind of people-leader perspective, you can come in and say, you know, what we really care about is your growth, and you are the closest to the tasks that you do.

And we're giving you the license to try to use these AI tools, assuming that you're doing it in a safe way: you're not exposing customer data and so forth, and that's a big, important part of it. And so, working with IT to develop some ethical guidelines, and just safety and security guidelines for the company.

But within those guidelines, helping people think about their capabilities as something that's malleable and something that can really be improved by playing around with AI. And, as you mentioned, taking a bottoms-up approach.

Tom Griffiths: Exactly. And we hear all the time, like every day, about the L&D teams that we serve being extremely stretched when it comes to resources, with too few team members to support all the learning initiatives that they'd love to be doing.

They just don't have the bandwidth. And so, if there are ways that learning professionals can get leverage on their time, maybe it's even just 2x. We'd love 10x, but just 2x would allow them to serve, you know, twice as many people, or create twice as many programs, or get leverage from their time in other ways.

And so it's incredibly exciting, I think, to be experimenting in L&D on the front lines so that you can fulfill your purpose and your mission more effectively for more people.

Alex Khurgin: We can say that up until now, the story of AI seems to be a lot of dedicated individual people picking up little efficiencies in their work, and in the overall scheme, these can really add up.

So, you know, the top-down portion of it is giving people license to play around, and there are now real indications that this kind of approach creates significant improvements in people's work. Which brings us to the second topic I wanted to focus on today: there's now a lot of research showing that AI significantly boosts knowledge worker productivity.

Specifically, there was a study done by a multidisciplinary team from Harvard, MIT, and UPenn. They observed 758 consultants working at Boston Consulting Group, which is one of the three big management consulting firms, along with McKinsey and Bain. And 758 consultants is like 7 percent of their workforce.

So it's a pretty sizable chunk. And what they found in the study was that for 18 different tasks that were selected to be realistic samples of the kind of work done by those consultants, those who were using GPT-4 outperformed those who did not on every single dimension they studied. So I was curious about the tasks, and they were legit.

There are tasks like: okay, generate ideas for a new drink in markets that are underserved. Pick the best idea and explain why, so that your boss and other managers can understand your thinking. Describe a potential prototype drink in vivid detail. Come up with a list of steps needed to launch the product.

That was just one type of task. Another task was: generate ideas for a new shoe aimed at a specific market or sport that is underserved; use your best knowledge to segment the footwear industry market by users. So you had all of these varied kinds of tasks, which seem like legit consulting tasks.

And the big takeaway for me, seeing this kind of wide-scale improvement across all of these different tasks, is: okay, this is confirmation that AI can, in fact, help people do their work better or more efficiently.

Tom Griffiths: Right. This is not the AI doing the tasks. This is them collaborating with the AI.

Alex Khurgin: Exactly. Yeah, this is these consultants. And I'm not sure if they actually looked at just the AI doing the task, but knowing what I know about GPT-4, I don't think it could outperform a human plus an AI on these tasks. The headline finding for most people seemed to be that, in the study, AI boosted low performers to well above average

but helped top performers less. So a lot of people seem to be thinking that, okay, LLMs are just going to raise the baseline of work quality in general, and the people who are going to benefit most are those currently in the bottom half of skills or experience, which is incredible. That's this great egalitarian opportunity.

But I think the truth is more nuanced, because, yes, across these 18 tasks, this is what happened. But just in my experience, having thought about different kinds of tasks, what I think is going on is that for roles or tasks that depend on standardized knowledge, where it's relatively easy to achieve peak performance, LLMs aid the most inexperienced people.

So, like, for a lot of customer support inquiries, the perfect response is relatively easy to achieve. Someone has a problem that you've seen many times before, and you solve it for them. And in those instances, a support rep can use an LLM that's trained on the company policies to come up with that perfect solution, even if they're a novice support rep.

And so for this task, solving a routine customer concern, it levels the playing field. But I think there are lots of other tasks that depend more on tacit knowledge, where it's hard to achieve peak performance and there's generally a lot of room to improve. And in those tasks, I think LLMs uniquely accelerate experts.

For example, continuing with the customer support idea: if you have a really complex inquiry, where someone's really emotional, they're asking for something that the company policy doesn't allow, maybe asking for something you've never heard of before, it's unlikely that this scenario is going to go well with a novice just working with an LLM, because the novice is not going to be able to discern which of the recommendations from the LLM is best or makes sense.

They're likely to be overwhelmed by the emotional state of the customer, whereas an expert might be able to calmly look through some generated recommendations and make the right judgment call. What do you think about that split, and how relevant do you think it is for how we think about LLMs aiding tasks at work?

Tom Griffiths: Yeah, no, I think it's right. The way that the models work, of course, is that they're trained on many millions of examples of similar things, and they build associations and probability relationships between tokens, and then those roll up into higher-level abstractions. And so they're best at things that are similar to what they've seen before.

And so, mapped to a work task, if there are lots of examples of successful customer service, then yes, they can perform well on that. I think the expert can get leverage if they know how to use the tool, though. They might not have a lot of patterns from inside the company for a particular novel HR scenario, but an expert could say, well, this is a bit like something else out in the world.

And so they can mine or probe the deeply trained model to get similar responses and use their human judgment to decide how those external things can apply to an internal task. So I think it gives both leverage, but you're right: for more novel tasks, you need a higher degree of skill and judgment to make them work for you.

Alex Khurgin: I think another underappreciated aspect of this is just willingness to use the LLM. And this goes beyond expertise or the difficulty of a task. It's simply going to be mechanically true that if you're willing to use the LLM, and you're willing to tinker and work through some of the challenges that come with using LLMs now, given their deficiencies, and put in the work, that's going to be the largest determinant of the value you get from AI tools. Because if you don't use them and don't make a habit of using them, you're going to get limited value, no matter how much of an expert you are, no matter what kind of tasks you're working on.

Tom Griffiths: Yeah, and I think that's where it comes back to tinkering. Because these tools are so new, it's not clear, on the outside of the box in some sense, what they can be used for. You need to experiment and play around a little bit to see the kinds of scenarios or use cases they could have for you and your particular kind of work.

I think one of the reasons that I've personally pulled back from using them for everything I can possibly think of is the hallucination, or inaccuracy, concern. And so I'm curious if you've thought about or seen advice on how to quality-check what you're getting out.

Alex Khurgin: Yeah, there are different strategies.

One is just to think about: what are the tasks I have to do where it doesn't matter whether something is true or not? I have specific examples of that in L&D. If I'm trying to come up with a case study for a class, it's perfect. I don't care about accuracy, because I'm trying to come up with something fictional.

And so I'm able to tell an LLM: come up with an example in which an executive who's really well-meaning is trying to drive their team to achieve a goal, but they're doing it in a way that isn't agile; it's more of a command-and-control example. And so I can use an LLM to very quickly generate some baseline example and then tinker with it from there.

So thinking about things where fact doesn't matter is really important. Another thing is to index on how much expertise you have in something. I find this when I'm using LLMs in fields where I already have a lot of tacit knowledge, and I'm able to discern: this is a good answer, this is not a good answer.

And this speaks to why, for some tasks, if I have that expertise, the hallucination doesn't matter as much to me, because it's easier for me to spot. But absolutely, as we saw with the now-famous example where a lawyer cited all these cases that didn't exist and was admonished by the judge, people cite research studies that aren't real, studies that mention real researchers but not real papers they ever wrote or real findings.

So you have to be really careful about that, because LLMs can be really sycophantic; they're just trying to give you what you want. Like, I'm looking for research that says this. And they're like, well, yes, sir, I'm going to give that to you, even if it doesn't exist. So one of the things I actually do is in my custom instructions for ChatGPT, which apply to every single prompt that you create,

I write: don't be a sycophant. Don't just agree with me for the sake of agreement. Don't just give me what I'm looking for if it's not true. And indeed, there's research showing that when you tell an LLM how to behave, how to think, and how to answer, you're going to get different kinds of answers.
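As a rough illustration, a standing instruction like Alex's maps onto the system message if you're calling the model through the API. This is a minimal sketch assuming the OpenAI Python SDK, with the instruction text paraphrased from what he describes and a made-up leading question:

```python
from openai import OpenAI

client = OpenAI()

# A persistent "custom instruction," applied to every request as the system message.
ANTI_SYCOPHANCY = (
    "Don't be a sycophant. Don't agree with me just for the sake of agreement. "
    "If the research or fact I'm asking for doesn't exist, say so plainly."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": ANTI_SYCOPHANCY},
        # A leading question the model might otherwise be tempted to "yes, sir":
        {"role": "user", "content": "Find me studies proving microlearning "
                                    "always beats live instruction."},
    ],
)
print(response.choices[0].message.content)
```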

So those are just some ideas for avoiding hallucinations. But yeah, you're going to have to do quality checks, and in the future there may be people in the organization whose responsibilities are more about checking the accuracy of things. Like we see now at the New Yorker: they don't just let anything go through.

They have a rigorous fact-checking process. But yeah, maybe we'll have to do that in other kinds of organizations, for other kinds of work, in ways we never imagined.

Tom Griffiths: Yeah. Another approach that can help is citing sources. And that's not necessarily as straightforward as it sounds, because a lot of the models are aggregating lots of different sources.

Uh, but what's the latest on how well ChatGPT or others can cite their sources?

Alex Khurgin: So Bard, I think, does a pretty good job of linking to direct Google results. So it'll give you an AI answer. I've also found Bard's answers sometimes seem to be directly plagiarized from some specific article that you can then link to, which is a whole other thing. Whereas in a lot of cases, ChatGPT is generating text that, if you were to Google that exact text, you wouldn't actually find.

And I know that ChatGPT now has the ability to browse the internet, so I feel like that means the citations are probably going to be more accurate. But I've just come down to this: anytime I'm looking for specific research, I don't just ask what the research says and have it summarize. I ask it for specific citations.

And then I go and find those actual papers, and I've found the hallucinations there to be greatly reduced. One other way to think about avoiding hallucinations is using a model that's grounded in your own data. If you use something like Azure Cognitive Search and you have it ingest all of your company's data and anonymize it, then you can get basically zero hallucinations, because you have a closed set of data, and every answer anybody gets is based on official data that your organization created.

And more and more, it's going to be the case that companies have their own instances of GPT that their employees can use, where they basically don't have to worry about factuality.
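The pattern Alex is describing is usually called retrieval-grounded generation: retrieve passages from a closed set of company documents and instruct the model to answer only from them. Here's a deliberately tiny sketch of the idea in plain Python, not Azure Cognitive Search itself, with a toy keyword matcher standing in for a real search index; in practice, grounding sharply reduces hallucinations rather than literally guaranteeing zero:

```python
from openai import OpenAI

client = OpenAI()

# A tiny stand-in corpus; in practice this would be your ingested company docs.
DOCS = [
    "Refunds are available within 30 days of purchase with a receipt.",
    "Managers approve PTO requests in Workday within two business days.",
    "All customer data must be anonymized before being shared externally.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank docs by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(DOCS, key=lambda d: -len(q_words & set(d.lower().split())))
    return scored[:k]

def grounded_answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Answer ONLY from the provided context. If the context "
                        "doesn't contain the answer, say you don't know."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(grounded_answer("How fast are PTO requests approved?"))
```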

Tom Griffiths: Yeah, I think that whole model is the future, certainly in existing companies, where the model isn't a competitive advantage because anybody can get the model.

There are open-source versions of these models. In many ways, it's the data that is proprietary to you that can give you a competitive advantage. And so if you're training your model on your proprietary data and creating value for customers out of that, then that's what gives you the competitive advantage. It's why, in many ways, many of the breakthroughs in this recent wave benefit incumbents like Google or Microsoft, who have the data sets to really have a proprietary advantage on the training side.

Certainly at Hone, we're thinking about all these amazing classes that we teach. We've got many of them transcribed. And so we can use AI to analyze what makes a great class, or what someone might want to go deeper on, or what are some of the trends or themes, across companies or within companies, that are coming up in training that people need help with.

And so it's exciting to think about, as a company: what is your proprietary data set, and how can we get leverage out of it?

Alex Khurgin: Yeah, the way you talk about it, it seems like there are these different fronts of AI advantage. There's the individual tinkering front, which is really valuable right now.

There's the L&D front, which is being aware of all the AI capabilities, helping people turn those into habits, and getting those things into people's workflows. And then there's the organizational strategy front, around: what is the data we have that's going to enable us to have a durable competitive advantage,

where we can train our models on data that no one else has access to? And it's interesting that you mentioned Google. I'll try to say this without revealing almost any information: a member of my family or a friend, I won't specify, has been offered many millions of dollars by Google for his company's proprietary medical data.

So these juggernauts don't just have their own data that they're working with. They're looking to buy more and more data, because they realize that this is ultimately the most valuable front. And after the tinkering window closes in a few years, data is going to be truly the most important source of competitive advantage.

Tom Griffiths: Yeah, agreed. 

Alex Khurgin: So here was another interesting finding from the paper. This paper on the Boston Consulting Group also talked about the so-called jagged frontier of AI, which is meant as a metaphor for the wildly different capabilities that AI has, right? So, as you mentioned, individual people don't know in advance what AI can do.

In fact, the companies that train these models don't even know. As far as I understand, OpenAI did not expect GPT-3 to be able to code at all. It was just this crazy emergent behavior that occurred because the model was trained on all of this code as text. So there really is this kind of wild playground, or jagged frontier, of capabilities, and there are different ways of dancing along that jagged frontier. I was surprised to see this kind of poetic language in academic research.

The researchers talk about two different ways of collaborating with AI, or two different kinds of AI workers. There are centaurs, who break down tasks into subtasks, assign some of the subtasks to AI, and do the rest themselves. So there's a clear demarcation, in the way a centaur is half man, half horse: you can see where the human ends and where the horse begins.

There's this clear demarcation between the work. And some people doing those tasks, like trying to come up with a new kind of shoe for an underserved market and so forth, did them in that way. The other model is the cyborg model, where people go back and forth between their work and AI: they use AI for every task, question its output, tweak the prompts, and what comes out the other end is a set of tasks where it's unclear how much was human and how much was AI.

It's just this big jumbled mess. And both of these styles of collaborating with AI can be highly productive. I really have no sense of when someone might be more of a centaur and when someone might be more of a cyborg, or which one ultimately makes more sense for what kind of person or what kind of task.

But there really is this distinction, and I think it's going to be an interesting thing to observe, again, as people tinker: what the process is for them, how it shakes out, and what the science becomes around the best ways to collaborate with AI tools. What's your initial gloss on this, just from seeing these two different categories?

Tom Griffiths: Yeah, I mean, it sounds like there's a spectrum, right? It's not bifurcated as cleanly as this. Some people may be the extreme centaur, with very demarcated tasks for the AI, and then others could be a mishmash: revisions, then inputting a bit more, and maybe they revise a passage of text that the AI wrote and put it back in.

At what point do you flip from being a centaur to a cyborg? I don't know. I think it's just a useful spectrum to think about. I imagine that as people get more comfortable with the capabilities of the AI and the tools, most people will trend toward cyborgs, where there's not a clear distinction between their work and the AI's work.

That feels like the most efficient, fluid, productive way to do it. Sure, there'll be some tasks that you can chunk off, like making an image. I'm not going to make some of the image; it'll be the AI making all of it. But it just feels so interactive, which is one of the beauties of this modality, that I think most people will end up there.

So, Alex, thanks for a great conversation on a bunch of ways that L&D, or workers in general, can leverage AI. What would you recommend for L&D or HR leaders now to stay ahead with AI and find new ways to get leverage in their roles, using these amazing tools that are coming out every week, it seems?

Alex Khurgin: So I have one specific recommendation and one general recommendation. The specific recommendation is to try out this incredibly fun quiz. It was very fun for me, and if you're interested in AI capabilities, it will be fun for you; that's the filter there. It's a quiz that helps you calibrate your knowledge of what GPT-4 is actually capable of.

The way it works is it shows you a problem and asks: can GPT-4 solve this problem, or will it give you the right answer? And then you indicate your confidence level, like, I'm 95 percent sure that GPT-4 can solve this problem or do this task. Then you see whether you were correct, and you can compare your calibration to other people's.

I found this to be a really illuminating exercise for understanding: oh, wow, there are these capabilities I had no idea about, and I thought it was actually really good at this, but it's not. So just doing this kind of quiz is a great way to calibrate your own knowledge. So that's the specific recommendation.
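For the curious, calibration on yes/no predictions like these is commonly scored with the Brier score: the mean squared gap between your stated confidence and the actual outcome. A minimal sketch, with made-up example tasks and outcomes rather than items from the actual quiz:

```python
# Each entry: (your confidence that GPT-4 can do the task, whether it actually could).
# The tasks and outcomes below are illustrative, not from the quiz itself.
predictions = [
    (0.95, True),   # "solve this logic word problem"
    (0.80, False),  # "multiply two 6-digit numbers exactly" (overconfident here)
    (0.30, True),   # "write working SQL from a plain-English description"
]

# Brier score: 0.0 is perfect calibration; always guessing 50% scores 0.25.
brier = sum((conf - float(outcome)) ** 2 for conf, outcome in predictions) / len(predictions)
print(f"Brier score: {brier:.3f}")
```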

The more general recommendation is to break down your tasks as an L&D person. Say you're an L&D leader: you're probably responsible in some way for building your learning team and setting goals for it, analyzing needs, rolling out programs, measuring learning, all of these different things.

And there are people out there in your role who have been trying to do these things using AI, and they love to talk about it. They've been talking about it on LinkedIn. Maybe they're writing a blog post, or maybe they have a tool that they created; they're a vendor, and their tool actually lets you do these things.

So just go out there and search for what people are doing with those specific tasks. And using your knowledge of GPT-4's capabilities, go out there and tinker and try things out, really methodically thinking about, for each task: how can I do this better using AI? And you can go a level further than that.

After you do it for yourself, you can help individual teams do it: helping each team leader, or every individual, list out the tasks they work on and think about, how can I be a cyborg or a centaur with this task to do it more efficiently?

Tom Griffiths: That's great. You have a deep background in L&D and have gone deep on AI, but you don't have a technical background in AI or machine learning.

So what resources have you leaned on the most to learn about the field? 

Alex Khurgin: Yeah. There are a few in particular that are probably the ones I lean on the most. There's the 80,000 Hours podcast, which is these long, in-depth interviews with people who are building all of these new and incredible AI tools, and also people who are trying to keep those tools safe.

And sometimes those people don't overlap, and it's kind of interesting when they don't. But that podcast is great because it's talking to the people at OpenAI, at DeepMind, and so forth who are actually building these tools. There's also a guy named Zvi Mowshowitz, a former Magic: The Gathering champion.

But I know him because he has a Substack where he does an incredibly in-depth weekly AI roundup called Don't Worry About the Vase. And then there's Ethan Mollick, who I think a former guest talked about as well. I resisted him for a little while, because I'm generally suspicious of anyone who bills themselves as a thought leader, you know, a LinkedIn influencer.

But he really knows his stuff. He's a professor at UPenn, and he also has a Substack where he goes in depth into all of these topics. Those probably go from most to least technical, so the most technical stuff comes from 80,000 Hours. And if you want to get really technical, the website that I think has changed my life the most in the last year is LessWrong.com.

It's where the smartest people in the world argue with each other in purely rational terms about the future of AI. So it's this really vibrant community of incredibly intelligent people. And because I have this deep L&D background, I've been doing learning technology for 17 years,

and for most of that time I've been focused on workplace learning, I apply that lens to anything I'm learning. So I come away, I think, with a unique perspective.

Tom Griffiths: That's great. Some fantastic resources there. But I also know you're diving in and playing and tinkering with the tools as well.

So, you know, what are some of the playgrounds that you've been using the most? 

Alex Khurgin: Yeah, I'm kind of basic in the sense that I like my brands. I like Jordan Brand, and I'll eat a particular cheesesteak or something. I've tried out different stuff, but I think ChatGPT is the best. GPT-4 is just the most powerful model out there, and that appeals to me.

I'm happy to pay, you know, 20 bucks a month to get access to the smartest AI out there. It's weird to me, if I'm going to be using AI, not to use the most powerful one, and I think my perspective is probably similar to a lot of people's. And I think it tells you where we're heading: we're always going to be trying to get more and more capability, and whatever is the most capable thing is going to be most attractive to people.

And so there's a lot of incentive to build a more capable model than GPT-4. But yeah, GPT-4 is great. I've tried out Bard and Claude and a bunch of others, and for other capability reasons too, like the voice mode that came out recently and GPT-4 Vision, OpenAI is really ahead of the curve for the time being.

Tom Griffiths: Fantastic. It seems like overall AI is going to be a big boost to productivity, but have you seen, or are you thinking about, any instances where AI might hurt productivity?

Alex Khurgin: There are. In radiology studies, for example, they've found that, in general, radiologists using AI performed better than those who didn't, and also performed better than AI alone.

And this is true for many kinds of tasks. But there are instances where using AI leads you to perform worse, because you mistrust the AI. If you don't have inherent trust in the tool you're using, you subject whatever recommendation it makes to a kind of scrutiny that ends up slowing you down and worsening performance.

And so if you're not calibrated to how good the tool actually is, it just interrupts whatever process you might be running. So trust is a really important part of it, especially if you're a learning leader, because people may not trust that AI can help them, or they've tried to tinker and come up short.

So Tom, are you familiar with the meme of the miner who's mining for diamonds and gives up with a few inches left, right where all the diamonds are? You know what I'm talking about?

Tom Griffiths: I haven't seen that one.

Alex Khurgin: Well, I'm sure our audience is more online than you are.

You know, in general, that might not be the best lesson for people, because the sunk cost fallacy is real, and oftentimes it really is just better to cut your losses. But using AI is not a one-time thing where, if it doesn't work out, you might as well cut your losses.

It's something people are going to be doing continuously for a while, and it does make sense to figure it out; in the meantime, you may waste some effort, but it's worth it. So if you're an L&D leader, helping people build that trust is going to be really important, and helping them work through any challenges or roadblocks they're facing, maybe by doing some kind of training, or just by helping people learn from each other.

Tom Griffiths: Yeah. And as people are getting familiar with where they can and can't trust the tool,

what's a good mental model? I've heard people compare it to having a new direct report or team member: you need to spend time with them to see where the strengths are and where the edges or the issues are. Is that the way to think about it?

Alex Khurgin: Yeah, I do think so. I think there are a lot of practical benefits to viewing an LLM as an employee or as a teammate,

in the sense that you actually get better performance by using prompts like, let's take a deep breath and work through this step by step. So for some tasks, the LLM is kind of like your intern or your employee. For other tasks, it's more like a teammate that you're basically even with and collaborating with. And for other tasks, it's like a super genius that you're asking a question and whose presence you're graced with.

But in all of those instances, it actually helps to be polite and professional, and it also, I think, puts you in the mindset of coming up with better prompts. If you're thinking about the tool as this robot, where you're like, I just want to speak to an agent, and you're cursing at it and stuff, you're probably not thinking in the clearest terms.

You're probably not coming up with good follow-ups. So there's a kind of magic thing that happens when you do view it more as a person. Maybe that's going to be a problem down the line that we'll almost certainly have to deal with, but in the meantime, doing that little magic thing has a lot of practical benefits.

The tool just works better. 
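As a quick illustration of the phrasing effect Alex mentions, here's a hypothetical side-by-side: the same made-up request sent once tersely and once with the "teammate" framing, again assuming the OpenAI Python SDK. Any difference you see on one run is anecdotal, not a benchmark:

```python
from openai import OpenAI

client = OpenAI()

TASK = ("Our training budget is $120,000. We're running 3 programs at $28,500 each, "
        "plus facilitation fees of 15% of total program cost. How much is left?")

# Same task, two framings: terse vs. the "deep breath, step by step" style.
for label, prefix in [
    ("terse", ""),
    ("teammate", "Let's take a deep breath and work through this step by step. "),
]:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prefix + TASK}],
    )
    print(f"--- {label} ---\n{response.choices[0].message.content}\n")
```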

Tom Griffiths: That's a great tip. Curious if you think there are other things that people might be getting wrong about AI. It's in the media a lot; there's a lot of hearsay and stories and conjecture. What do you think people are getting wrong?

Alex Khurgin: So I've seen some friends actually post their intentions of becoming a prompt engineer, saying that they wanted a prompt engineering job.

And I had these assumptions too when I was first learning about AI. But I think what's already happening, if you look at AI implementations in specific tools that people use a lot, like Salesforce, is that you're largely interacting with an AI by clicking recommendations. There is some prompt engineering that goes into it, but a lot of it is already baked into the product and the workflow.

And it makes sense that tools with a lot of people on staff trying to make them as usable as possible are probably going to give you recommendations, rather than a blank interface that lets a few power users prompt-engineer their way to way more value than everyone else gets.

Products want all of their customers to do well. So in that sense, I don't think prompt engineering makes sense as a future state. And further, insofar as prompt engineering is important, I think it gets cannibalized by a few people in the organization. Salesforce Einstein, for example, has these templates that you can create.

So you can get your savviest salesperson to go in and put together some templates that everybody else uses. So prompt engineering at a general scale doesn't really make sense to me as something to index toward. That said, prompt engineering still matters now, but I think the future is

just thinking really clearly about what it is that you want, and maybe not using that as input so much as using it as a way to evaluate the output. So an AI is going to do something, and then you look at it with your judgment. And if you're clear in your judgment and in your values and standards, then you can look at that output and identify if something needs to be changed or edited or whatever it is.

And I think that's the more valuable skill, and where people should be focusing.

Tom Griffiths: Yeah, I agree with that. Prompt engineering seems more like a skill that you bolt onto your existing responsibilities than a pure-play dedicated role. If I'm a salesperson, as you said, or a law clerk, I need to learn how best to use the tool and interpret the output, but that's just an extension of my existing role.

So it's a skill-based thing. To bring it back to L&D: how do you think an L&D leader should go about experimenting with or implementing AI in their workplace right now?

Alex Khurgin: I think there are a few pathways for experimentation, and I don't know which one makes sense to start with, but I think all of them are ways of thinking about it.

One is a task-centric approach, where you help people identify: what are the tasks that you do? What percent of your time do you spend on each of them? And how much time could you save? A lot of people might not know the answer, but there are ways of answering that question. You can literally Google the task plus AI.

So, you know, training needs analysis. If you're an L&D person, that's probably one of your tasks to some extent. You can look up "AI training needs analysis," and you'll get an understanding of how much of that task can actually be accomplished with AI. And then you can do this with each individual, or if you're an L&D leader, maybe you're initially just talking to department heads.

You're trying to understand: where are the opportunities where we could deploy maybe a single AI tool to meet as many of them as possible, because we know how much time we could potentially save? And then, as part of that exercise, helping people think about where else they could devote their extra time and energy if they were saving time on these things.

So that's the task-centric way of thinking about it. Another way of thinking about it is capability-centric: just being aware of the capabilities of the large models. There's vision and voice and hearing and cross-modality, meaning you can turn any one thing into any other thing. I've been on the content side of L&D for a long time.

You know, my whole career, there's always been a question around: what should this be? Should it be a video, or audio, or a blog post, or a white paper? Well, you can now do all of those things. You can turn a book into a game. You can turn a legal contract into a decision tree. You can turn a blog post into a podcast into some other thing.

And so for each of these capabilities, vision or voice or whatever, you can ask: what are the existing tasks we do that this applies to? Or you can think about the new, valuable stuff that we could never have done before but can now do with these capabilities.
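Here's a sketch of what that cross-modality point can look like in practice: one source document, several output formats, where only the instruction changes. The file name and the specific transformations are hypothetical, and this again assumes the OpenAI Python SDK:

```python
from openai import OpenAI

client = OpenAI()

# Any piece of source content; the file name is a placeholder.
with open("class_transcript.txt") as f:
    source = f.read()

# One source, many formats: "cross-modality" as a change of instruction.
TRANSFORMS = {
    "spanish": "Translate this class transcript into Spanish.",
    "summary": "Summarize this transcript as five bullet-point takeaways.",
    "quiz": "Write three multiple-choice questions that check understanding of this transcript.",
}

for name, instruction in TRANSFORMS.items():
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": f"{instruction}\n\n{source}"}],
    )
    print(f"=== {name} ===\n{response.choices[0].message.content}\n")
```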

One of those new things, for example: Hone is a learning company, and it would be really interesting if we could just translate all of our content into any language out there. There's a third pathway, too, which is people-centric. You could, for example, ask around or send out a survey and see who's using AI in your organization.

Which teams do they sit on? Let's talk to them and ask what they're learning and what they'd like to share with others. You can also identify people who are skeptical and see the risks; you might want to talk to them and make sure that what you're doing with AI is secure and wise. And then there are also people who are not just using AI to do their jobs better, but who really understand the technology and can help develop a strategy for the future of your business and how your organization gets work done.

Being AI-savvy is this completely new capability that has come online, and it's not necessarily the people you would expect who have it, and it's not only the people who are currently making the biggest decisions. So there are people in your organization who suddenly have this new superpower and can really help you strategically, both in terms of the business overall and in terms of internal processes.

Tom Griffiths: Yeah, that makes sense. And it can create this beautiful connectivity between folks in different departments, connected through this shared interest or expertise, who find really interesting applications within the company by getting those different disciplines together. So it's a great point.

Alex Khurgin: Are you ready for the twist, Tom?

So the twist is, I think there's a fourth pathway, which is looking at the tools that you're already using: Slack, Lattice, Salesforce, Notion, Asana, Google Docs, and all of their equivalents. Meaning all of the tools where work gets done in an enterprise; they all have AI built into them now. And all the big AI labs have enterprise offerings as well.

And if you've already bought tools, or built them, you can start by looking at those: what AI capabilities may have just come online in the last couple of months, and whether people are actually taking advantage of them. Because you've already paid for these tools, and a lot of them just have AI functionality built in now.

You don't have to pay anything else, and people are just not using those things. There may be a meeting transcription tool that you're not using. Or, for example, we use Descript to edit podcasts and audio in general, and there are all these amazing AI tools in it that, for some tasks, cut the task in half or more, or let you do things you couldn't possibly have done.

And so just looking at the tools you've already paid for is probably a good place to start as well.

Tom Griffiths: Right on. That's great. Super helpful. Thanks, Alex. And I just want to say a deep thank you for all the research you've done and the insights and depth of knowledge you've shared with our listeners today.

I hope we can have you back sometime to do another episode on AI and L&D.

Alex Khurgin: I would love to be back, Tom. Thank you so much. 

Tom Griffiths: Thanks for listening to Learning Works. If you've enjoyed today's conversation, we encourage you to subscribe to the podcast for our exciting lineup of future episodes. Learning Works is presented by Hone.

Hone helps busy L&D leaders easily scale power-skills training through tech-powered live cohort learning experiences that drive real ROI and lasting behavior change. If you want even more resources, you can head to our website, honehq.com. That's h-o-n-e-h-q dot com, for upcoming workshops, articles, and to learn more about Hone.
