Previous Shows

The Digital Download

How AI and Leadership Drive Business Growth

December 13, 2024 • 48 min read

This week on The Digital Download, we’re looking at the intersection of leadership and technology with Arnaud Lucas, CTO and VP of Engineering at Wanderu. With a track record of transforming organizations and driving exponential growth through AI and innovation, Arnaud brings unique insights on how technology can create scalable solutions and build high-performing teams.

Join us as we explore questions like:

• How can leaders align AI strategy with business goals?

• What are the pitfalls of innovation without a clear vision?

• What role does leadership play in sustaining technological growth?

• How do you build high-performing, diverse teams?

Arnaud’s hands-on experience in scaling teams, leveraging data, and driving innovation offers valuable lessons for anyone looking to thrive in a tech-driven landscape. Gain new perspectives on aligning vision with execution for lasting impact and discover how forward-thinking leadership and strategic use of technology can foster innovation and unlock growth.

We strive to make The Digital Download an interactive experience. Bring your questions. Bring your insights. Audience participation is highly encouraged!

This week we were joined by our Special Guest - Arnaud Lucas, CTO and VP of Engineering at Wanderu.

This week's Host was - Rob Durant, Founder of Flywheel Results.

Panelists included - Bertrand Godillot, Tracy Borreson, Adam Gray, and Tim Hughes.

Transcript of The Digital Download 2024-12-13

Rob Durant [00:00:01]:

Good morning, good afternoon, and good day wherever you may be joining us from. Welcome to another edition of The Digital Download, the longest-running weekly business talk show on LinkedIn Live, now globally syndicated on TuneIn Radio through IBGR, the world's number one business talk, news, and strategy radio network. Today, we're discussing how AI and leadership drive business growth. We have a special guest, Arnaud Lucas, to help us with the discussion. A CTO and VP of engineering at Wanderu, Arnaud's hands-on experience in scaling teams, leveraging data, and driving innovation offers valuable lessons for anyone looking to thrive in a tech-driven landscape. But before we bring him on, let's go around the set and introduce everyone. And while we're doing that, why don't you in the audience reach out to a friend? Ping them. Have them join us.

Rob Durant [00:00:59]:

We strive to make the digital download an interactive experience, and audience participation is highly encouraged. Right. So with that, introductions. Bertrand, would you kick us off, please?

Bertrand Godillot [00:01:14]:

Sure. My name is Bertrand Godillot, and I am the founder and managing partner of Odyssey & Co, working with customers to generate more conversations. And I'm very happy to be here today, because we will be two French natives on this call, and I'm so glad. That doesn't happen very often.

Arnaud Lucas [00:01:34]:

Très bien!

Rob Durant [00:01:38]:

3? I thought he said 2. Thank you, Bertrand. Tracy.

Tracy Borreson [00:01:47]:

Good morning, everyone. Tracy Borreson, founder of TLB Coaching and Events, where we care about marketing that works for businesses, not just marketing to do marketing. Proud partner of DLA Ignite. And I too am really excited, because I'll tell you, there's a lot of talk about AI and marketing, and most of it's not useful. So I'm super happy to have a good, focused conversation today on how we can benefit from it.

Rob Durant [00:02:13]:

Excellent. Thank you very much. Adam.

Adam Gray [00:02:16]:

Hello, everybody. I'm cofounder of DLA Ignite. And a conversation about AI is always welcome, so I'm looking forward to this.

Rob Durant [00:02:26]:

Excellent. Thank you. And, Tim?

Tim Hughes [00:02:28]:

Thank you. Welcome, everybody. Tim Hughes. I'm the CEO and cofounder of DLA Ignite, famous for writing the book Social Selling: Techniques to Influence Buyers and Changemakers. And as Adam said, any discussion on AI, I think, is excellent, because there's always something that we can learn.

Rob Durant [00:02:47]:

Absolutely. And myself, I am Rob Durant. I am the founder of Flywheel Results. We help startups scale, and I too am a proud DLA Ignite partner. As I said, this week on The Digital Download, we'll speak with Arnaud Lucas. With a track record of transforming organizations and driving exponential growth through AI and innovation, Arnaud brings unique insights on how technology can create scalable solutions and build high-performing teams. Let's bring him on. Arnaud, good morning and welcome.

Arnaud Lucas [00:03:25]:

Thank you for having me here.

Rob Durant [00:03:28]:

Absolutely. Arnaud, let's start by having you tell us a little bit more about you, your background, and what led you to where you are today.

Arnaud Lucas [00:03:37]:

So as you can hear from my accent, I'm from France, as Bertrand suggested. But I've been living in the US for 26 years, so it has been a while. First eight years in Texas. I moved there to follow my love, my wife now. And then, when she was done with her studies, we moved to Boston, where I've been for the last 18 years. And, so as you mentioned, I'm a CTO slash VP of engineering. I like to lead and inspire high-growth technology firms that are organization- and customer-focused, to really solve customer problems. I use technology as a tool more than anything else.

Arnaud Lucas [00:04:20]:

And, yeah, I like to create high-trust, high-performing teams. I like to match strategic ambitions to technological execution, whatever that means. So that's me.

Rob Durant [00:04:33]:

Excellent. Thank you. So let's start with a foundational question. Mhmm. How can leaders align AI strategy with business goals?

Arnaud Lucas [00:04:46]:

So that's the thing. AI is a tool. Right? And by the way, AI is not new. My minor was actually in AI, and that was a while back. Generative AI, which is the newer concept of AI, is still a tool. I think people kind of confuse that a bit. There is an expectation that every company should be using Gen AI at this point. There is an expectation by the market, by the investors. And I think this expectation exists because, contrary to previous waves of technology like crypto and blockchain, AI or Gen AI is such a tool that it can be used in different contexts, and every company should be able to use it.

Arnaud Lucas [00:05:32]:

Now, I understand that expectation. Right? If you cannot do Gen AI, it means that you are not innovative enough. That's kind of the reasoning behind it. The problem with that is that when you look at the reality of things, there are really two main use cases of AI. There is the copilot use case, which is where you use AI as a way to help you do your job. And I'd say some companies have little say about that. I will argue that employees may just use AI without the company even knowing about it.

Arnaud Lucas [00:06:06]:

So strictly speaking, I'm sure that every company out there is actually using Gen AI in some way, but they may not know it. The other use case is more about being able to integrate Gen AI as a tool into your product, into what you are selling to your customers. That becomes way more complicated. Companies today have created a lot of prototypes. They're saying, oh, well, we can have a chatbot for our customer service. We can do this and we can do that. There are plenty of places where, generally, it seems like a good idea. And to create these prototypes today is actually not that hard. Right? You take a model. You feed it with some of your proprietary data.

Arnaud Lucas [00:06:49]:

In some way, you can just guide it. You can do fine-tuning. You can do your own training altogether. And from that, you're like, okay, let's ask questions, get answers. Everything's good. We get extra insights. And the problem is, it's not that simple.
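The step Arnaud describes, feeding a model some of your proprietary data, is often done by retrieval: pick the most relevant internal documents and prepend them to the prompt. A toy sketch with a keyword-overlap retriever; real systems use embedding search, and the sample documents here are invented for illustration:

```python
# Toy retrieval step: choose the internal documents most relevant to the
# question, so they can be fed to the model as context. Real systems
# score with embeddings; here we score by shared words.

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents sharing the most words with the query."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

docs = [
    "Refunds are processed within 5 business days.",
    "Wanderu searches bus and train routes across North America.",
]
context = retrieve("how long do refunds take", docs)
print(context)  # the refunds document wins on keyword overlap
```

The retrieved text would then be pasted into the prompt, which is usually far cheaper than fine-tuning or training from scratch.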

Arnaud Lucas [00:07:05]:

Right? Because it's a tool, it works well most of the time. The problem is the hallucinations. And because of the hallucinations, that's what prevents companies from going to production with these tools in a lot of cases. People are doing a lot of prototyping right now, but not that many are actually making it into production. And I think that's the biggest hurdle. How do you mitigate hallucinations? How are you able to make it so that you don't create more risk for the organization, and you don't create liability for your organization, with AI doing some crazy stuff? Because now you have this autonomous thing that, at the end, doesn't really think. That's not what Gen AI does. It doesn't do any reasoning.

Arnaud Lucas [00:07:52]:

It only tries to guess what's the next token in a string of tokens. So I think that's the tough part. So I do believe that, ultimately, it's all about trying it out and seeing where it can help. I will argue that at first, you want to use an approach called responsible AI, which effectively means that you don't trust AI. Usually, that means you have a human in the loop, or you have some other way to check that whatever AI is doing is right, instead of exposing the output of AI directly to your customer. For me, that's the biggest thing for now that a corporation should be considering. Finding the use cases, right, for how to integrate AI into your strategy is actually not that hard, because AI can be used for all kinds of things. So it's more about what's your critical need, where is there a critical opportunity for improvement, for optimization, and then prototype it.

Arnaud Lucas [00:09:05]:

And then the most difficult part, in my opinion, is how do you box it in so that it doesn't go wild.
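The "box it in" idea, keeping a human in the loop so raw model output never reaches the customer directly, can be sketched as below. The model call is a stub standing in for a real LLM API, and all the names here are our own invention:

```python
# Minimal "responsible AI" gate: the model's draft answer is parked in a
# review queue instead of being exposed directly to the customer.

def model_answer(question: str) -> str:
    """Stub standing in for a generative-model call."""
    return f"Draft answer to: {question!r}"

def respond_to_customer(question: str, review_queue: list) -> str:
    draft = model_answer(question)
    # Box the output: queue the draft for a human instead of exposing it.
    review_queue.append({"question": question, "draft": draft,
                         "approved": False})
    return "Thanks -- a human agent will confirm the answer shortly."

queue: list = []
print(respond_to_customer("Can I get a refund?", queue))
print(queue[0]["draft"])  # held here until a reviewer flips "approved"
```

Only once a reviewer (or an automated verification layer) marks the draft approved would it be sent on, which is exactly the trade-off between safety and the productivity gain.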

Adam Gray [00:09:12]:

So you spoke about AI hallucinations, where AI misinterprets the landscape and goes down a path which is inappropriate or fundamentally wrong.

Arnaud Lucas [00:09:24]:

Yes.

Adam Gray [00:09:26]:

Now, we as a business have not tried to package up AI and sell it to customers. But from an observational perspective, when I look at most human beings, they want to be lazy. We all do. We want to do the least work possible to achieve the outcome that we want to get. And AI seems like a perfect tool to enable us to do this, whether that's writing something that you're not confident writing or haven't got time to write, or automating a marketing process, or whatever it may be. So isn't the danger that AI gets better and people simply ignore the possibility of hallucination? They go, ah, that'll be fine, because it saves me spending six hours doing this and AI can do it in 10 seconds. And they just switch it on and off it goes.

Arnaud Lucas [00:10:21]:

So let me give you an example. Because I think that's the copilot model I was mentioning, where people rely on AI to do their job in some shape or form. And I do believe there's a use case for that. I do believe that's the safest use case, if people don't get lazy, to your point. By the way, I don't think people are lazy. I think people try to optimize.

Bertrand Godillot [00:10:45]:

That's the French way of saying it. No?

Arnaud Lucas [00:10:46]:

Actually, that's good to hear. But I like to optimize my time so I can do more gardening and baking and other things. But I'll give you an example. So one evening, my daughter, who is in seventh grade, came back from school and I came back from work, and she says, dad, I have this math problem that nobody in my class is able to solve. I was like, okay, that's kind of surprising. She said, yeah, we even asked the dad of my friend, who is an MBA graduate.

Arnaud Lucas [00:11:19]:

He could not solve it either. I was like, oh, interesting. That seems like a challenge. And so I look at the problem, I solve it. No trouble. Right? I do all the calculations. I come up with the result. We plug the result into the app.

Arnaud Lucas [00:11:35]:

Right? Because it's a website where you can just put the result in. And the website says, oh, your result is wrong. At that point, I'm like, oh, most likely your teacher put in the wrong answer. It's as simple as that. Right? But I was like, you know what? Alright. We're going to double-check with ChatGPT. So I put the problem into ChatGPT, and away we go. ChatGPT gives me the right steps, step by step.

Arnaud Lucas [00:11:59]:

It doesn't get the reasoning wrong. The steps are matching. That's great. It plugs in the right numbers. Great. It gives me the wrong result. And that's not surprising, because ChatGPT, or any generative AI, doesn't do math. So expecting it to do math is a problem.

Arnaud Lucas [00:12:17]:

But, yeah, a lot of people don't know that, especially one of my daughter's friends. So I double-checked my numbers, just to be sure, and told her, yeah, I'm right. ChatGPT is wrong. But one of my daughter's friends started saying, oh, well, I've found the right answer. This is the right answer. And it was exactly the answer that ChatGPT gave. And she was saying, oh, this is the right answer, I'm sure of it, just because she put something into ChatGPT and it gave something back.

Arnaud Lucas [00:12:49]:

So I think, for me, it's not just that people may be lazy. There is an opportunity for people not to double-check. I've seen plenty of issues, for example, even in my own use cases. I'm researching a company and I ask the AI, so when was the company founded? And it gives me a date. Then I double-check, and now it's a different date. So there is some double-checking needed when this happens. But what I'm more worried about, actually, is people not even recognizing when there is a hallucination and taking it as truth.

Arnaud Lucas [00:13:22]:

Right? Because of everything that's being done under the hood, or just because there is lots of magic that happens in there between the input and the output.
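The math-homework story illustrates the point Arnaud makes himself: a generative model predicts the next token rather than computing. A common mitigation is tool use: let the model set up the expression, but hand the arithmetic to deterministic code. A minimal sketch, where the expression string stands in for hypothetical model output:

```python
import ast
import operator

# Map AST operator nodes to real arithmetic.
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def safe_calc(expr: str) -> float:
    """Evaluate a plain arithmetic expression without a bare eval()."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval"))

# The model proposes the setup; deterministic code produces the number.
print(safe_calc("(12 * 7) + 5"))  # 89
```

This is the pattern behind "calculator tools" in LLM products: the model never emits the final number itself.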

Adam Gray [00:13:32]:

Yeah. So, I mean, I think we get that. You talk about the school situation. You get that with your children anyway, don't you, where the teacher says it's A and you say, no, it's not, and they say, yeah, but my teacher says it is, and my teacher must be right. And we've now got AI to contend with in the same way.

Adam Gray [00:13:49]:

So the problem is one of me not having the confidence in my answer

Arnaud Lucas [00:13:56]:

Mhmm.

Adam Gray [00:13:56]:

And assuming that because AI says it is this, it is this. So how do we safeguard against that?

Tracy Borreson [00:14:04]:

Can I just add a little on this before we go to that question? Because something I think of as you're describing that is, I feel like there are situations where there is a right answer. Like, 2 plus 2 is 4. And so if I ask you what 2 plus 2 is and you tell me it's 3 and you're hallucinating, then that's a problem, because it's actually a wrong answer. If you ask the question, when was this company founded, and it gives a date that's not the day the company was founded, it's actually a wrong answer.

Arnaud Lucas [00:14:36]:

Mhmm. But

Tracy Borreson [00:14:37]:

I also think there's a potential to take advantage of that. Because there are also questions that don't have right answers. And one of the phrases Adam used was my answer. Okay? So it's like, what's the difference between my answer and the answer? And does that make a difference in how we might want to use generative AI?

Arnaud Lucas [00:15:05]:

That's a good point. For me as an engineer, I see it a slightly different way. You're right, sometimes there is no hard answer. But that's why we call them hallucinations and not bugs. Bugs suggest that there is a problem with the code, a software issue, and therefore you know that it's supposed to be right and it's not doing the right thing. So there's a bug in the software that caused it to not do the right thing. But here, for Gen AI, we talk about hallucinations exactly for that purpose, because of how it can go wrong. Some of what we just talked about is what they call factual hallucination.

Arnaud Lucas [00:15:47]:

So it doesn't give you the right facts. Okay. To your point, that's easy to check. Not only that, but mitigating or verifying that is easy, because you can have your own dataset. You can say, okay, give me this, and check if it's right. So that's actually the easiest kind of hallucination to deal with, because you can just check it.

Arnaud Lucas [00:16:07]:

But there are other kinds of hallucinations. One is logical hallucination, which is funny, because it's even called hallucination when Gen AI is not logical in the first place. It's an interesting kind of bug for something where logic is not what it does. And then there is contextual hallucination. I think that's what you're getting at, which is when the model generates an output that conflicts with the provided instructions or context. You ask for something, and you may not have given everything there is to give about that thing, and therefore the model tries to do its best to guess what should come next, but it goes in the wrong direction.

Arnaud Lucas [00:17:00]:

And in that case, to your question, there's nothing wrong or right about it. It's just the way it is. And it's a very subjective judgment from your side: oh, you gave me the wrong answer. But maybe it's user error. Maybe the user was wrong. So I think that covers that. And in that case, what the tool generates is perfectly valid. It's just that it doesn't meet your expectations.

Arnaud Lucas [00:17:36]:

And then the question is how to make sure of that. That's where things like prompt engineering come in: it's one case where you kind of massage your prompt to get to the right answer that you are expecting.
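Massaging the prompt usually means carrying explicit context and constraints, so the model has less room to guess in the wrong direction (the "contextual hallucination" above). A sketch of such a template; the wording and field names are our own illustration, not a standard API:

```python
# Build a constrained prompt instead of asking a bare question. The
# "ONLY the context" instruction and the escape hatch ("I don't know")
# are common prompt-engineering moves to limit guessing.

def build_prompt(question: str, context: str, rules: list) -> str:
    rule_lines = "\n".join(f"- {r}" for r in rules)
    return (
        "Answer using ONLY the context below. If the context does not "
        "contain the answer, reply exactly: I don't know.\n\n"
        f"Context:\n{context}\n\n"
        f"Rules:\n{rule_lines}\n\n"
        f"Question: {question}"
    )

print(build_prompt(
    "When was the company founded?",
    "Wanderu is a ground-travel search site founded in Boston.",
    ["Quote the sentence you relied on", "Do not speculate"],
))
```

The same question asked bare and asked through a template like this can behave very differently, which is the whole point of the technique.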

Rob Durant [00:17:50]:

We have a few comments from the audience. First, Andrew Schlesser checking in, who says, for those who wonder why all the presenters wave at my message, it's because I wave at them. This interaction shows the power of social media. Andrew, it's always good to see you. And Rob Turrell shares with us: if anyone watches the UK programme QI, QI would question everything we were originally taught. We are also learning from different biases from history, too. Good point there.

Rob Durant [00:18:26]:

So there is a case to be made for your fact and my fact. But I don't think that applies to some absolutes, like 2 plus 2.

Tracy Borreson [00:18:38]:

But then where does it apply, and where does it not apply?

Rob Durant [00:18:42]:

Oh, I'm sorry. You wanted the philosophical show?

Arnaud Lucas [00:18:46]:

I thought it was a business show.

Tracy Borreson [00:18:51]:

There's a lot of philosophical stuff happening in the world of AI. Just saying.

Rob Durant [00:18:54]:

You're not wrong.

Tim Hughes [00:18:56]:

So I've got a question. The show's about AI and leadership, and what I'm interested in is what should leadership be doing about AI? Because I think it's a bit like social media. A lot of the employees will be able to access social media, and even if you block it on desktops, they'll still access it through mobile. And so, you know, AI will have seeped into an organization whether the leadership likes it or not.

Arnaud Lucas [00:19:35]:

Yes. That's correct. And I think, in that case, the goal is actually to embrace AI. You don't really have a choice. Like I said, there is an expectation you should be using it anyway, for every company. So I think the question is not if, it's how. How to embrace Gen AI. Same thing, to your point.

Arnaud Lucas [00:19:59]:

The employees will use it anyway. So what companies have done, typically, is deploy their own AI for employees to use, and then try to prototype things that potentially they could launch in production within their own products, to kind of get a feel for it. Hopefully, there are some good use cases that they can still find for AI. The main issue is that we talk about hallucinations a lot, and it's not as easy as just, oh, this is a great tool and we should use it and we're done. There is that notion of responsible AI that companies then need to figure out how to implement. Because, to your point, you can't just let it go free. You have to box it in in some ways. The best way, obviously, is to deploy your own AI.

Arnaud Lucas [00:20:53]:

Right? And then you can put a verification layer on top. You can mitigate things in different ways. There are different techniques for doing that. But some employees will say that the AI that has been deployed for them by the company is actually not that great. So in that case, they are still going to go outside the box and say, hey, let me ask ChatGPT instead. With using tools like ChatGPT, there are multiple issues. One is that ChatGPT, and let's just focus on ChatGPT, but it could be Claude,

Arnaud Lucas [00:21:28]:

it can be Gemini. They are meant to be generalists. They have been trained on a lot of different types of inputs from everywhere, sometimes in a conflicting fashion, actually, and they are not meant to solve any problem in particular. They just give you this broad understanding of the overall picture. So they are not necessarily created for specialized tasks, and that's why people are talking about training specialized models on their own data for their specialized use cases instead of using these generic models. And that's an interesting aspect: I don't think, at least at first, that companies will find their best use cases within their own products in this general Gen AI. I think it's going to be more about having specialized models that fit a very specific purpose, just like we did before with machine learning. It's not different, just a variation of that. For their own employees, the question becomes: what do you let the employees do, and what do you not let them do? And I think that's a hard topic.

Arnaud Lucas [00:22:50]:

Right? In a lot of companies, I think they just let them go for it. The problem with that is that with ChatGPT, or any kind of generic general AI like this, if you ask a lot of questions, does the algorithm get trained on that too? Whatever you tell ChatGPT kind of feeds into its input at some point. And that means that potentially data from your company is actually leaking, and it may come back in some other way to some other company, who says, hey, I want to do something like this, and it shows them, oh, yeah, well, this is how you do it. And actually that came from your own company, because you used ChatGPT to figure it out in the first place. So there is a danger there. And, yeah, you've seen that specifically with using AI for generating images.

Arnaud Lucas [00:23:48]:

Yeah. People complain: hey, this is my work, what are you doing? Because, ultimately, AI is very good at repackaging knowledge, being able to take it in and give it back to you as output in a slightly different fashion. So I think that's the part you need to be very concerned about: how do you protect your data.

Bertrand Godillot [00:24:13]:

Right. So, Arnaud, with everything we've said so far, and given the caution that we should all take around the usage of this, I'm actually quite interested in your views on how, as a leader, you should actually behave. Do you consider this an experiment? That seems to be the case, right? And because of the promise of, I don't know, productivity gains, releasing more time, you know, the good old bullet points on innovation. Do you have a framework, or what's your take on this?

Arnaud Lucas [00:25:01]:

So, yes, I do have a framework. I don't know if you are aware of the tech radar. It's kind of this target thingy. You have technologies on the outside of the target, and then, as a technology gets proven and gets more and more used in the organization, it moves into the middle of the target. And then, when it becomes obsolete, it comes out of the target. That's what the tech radar is. You put things on hold.

Arnaud Lucas [00:25:30]:

You try them. You have different stages. And for me, the question is where Gen AI fits into that. And for me, it is in trial mode. Meaning you have some use cases that you're prototyping with, and potentially you are even using it more, in that copilot notion of helping you, but it's not in production yet. It's not authorized for use across the organization. So you're not at that stage yet. It hasn't been proven to the point where you can just say, oh, yes.

Arnaud Lucas [00:26:09]:

It worked perfectly in production for this and this use case; therefore, now every team in every part of the company can make good use of it. So that's how I see it. It is still too early. I do believe there is opportunity for the models to improve as well because, mind you, on a daily basis I see hallucinations. It's not just once in a while. It's very common. So I will argue that we're not in that proven, middle-of-the-target space yet with generative AI.

Arnaud Lucas [00:26:46]:

Now can we be? Yes. But we're not there yet.
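The radar framework can be captured in a few lines. The ring names below follow the common ThoughtWorks-style radar, which is an assumption on our part since the transcript only describes rings of a target, with Gen AI placed in "trial" as Arnaud suggests:

```python
# Minimal tech-radar representation: technologies sit on a ring and move
# inward ("assess" -> "trial" -> "adopt") once they have proven out.

RINGS = ["assess", "trial", "adopt"]  # outermost to innermost

radar = {
    "generative AI": "trial",     # prototypes only, not authorized org-wide
    "machine learning": "adopt",  # proven in production
}

def promote(radar: dict, tech: str) -> None:
    """Move a technology one ring inward once it has proven itself."""
    i = RINGS.index(radar[tech])
    if i < len(RINGS) - 1:
        radar[tech] = RINGS[i + 1]

promote(radar, "generative AI")
print(radar["generative AI"])  # would move to "adopt" once it earns it
```

His argument is exactly that this `promote` call should not happen yet: hallucinations are still too common for the middle of the target.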

Bertrand Godillot [00:26:49]:

Clearly, as a daily user, what I can say is that the same question doesn't get the same answer. If you ask the same question twice, you may not get the same answer, which is kind of disturbing for someone who's been spending 30 years in IT. You are kind of trained to the point that if you ask the same question twice, you should get the same answer. Now, I don't know yet if it's a challenge, or maybe I'm just using it in some areas where I should still be doing things differently. So I do agree that there's a learning curve, basically. But I would assume that if you lead an organization of a hundred, a thousand, ten thousand people, and you're still looking for efficiency gains, for instance, and you were sold on that promise, it must still be a challenge to say, well, it's not yet ready, while at the same time I see my competition actually using it. So it's kind of a race, like on every innovation. So, just food for thought on this, and maybe brilliant ideas that you could share.
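Bertrand's observation, the same question giving different answers, falls out of how generation works: the model samples from a probability distribution over next tokens. A toy illustration with invented logits, not a real model; at temperature 0 the pick is greedy and repeatable, above that it is sampled and can vary:

```python
import math
import random

def pick_token(logits: dict, temperature: float,
               rng: random.Random) -> str:
    """Choose one token: greedy at temperature 0, sampled otherwise."""
    if temperature == 0:
        return max(logits, key=logits.get)  # deterministic, same every time
    # Softmax-style weighting, then a weighted random draw.
    weights = [math.exp(l / temperature) for l in logits.values()]
    return rng.choices(list(logits), weights=weights, k=1)[0]

logits = {"2012": 2.0, "2013": 1.5, "2010": 0.5}
rng = random.Random()
print(pick_token(logits, 0, rng))    # always "2012"
print(pick_token(logits, 1.0, rng))  # may differ from run to run
```

This is why many APIs expose a temperature setting: turning it down trades creativity for repeatability.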

Arnaud Lucas [00:28:16]:

I mean, the problem with generative AI is that it's deceptively easy to use. And what I mean by that is everybody can use generative AI today. I can just show up on Gemini and start using it. And, you know, I have my nemesis, which is plastic wrappers. You get your package, and you're trying to open something. I was baking a cake from the cranberry sauce left over from Thanksgiving, and there's this vanilla extract I was trying to open, with this little plastic wrapper on top. It took me like 15 minutes to open that plastic wrapper, because it's supposed to be an easy-to-use way to open it. I could not open it.

Arnaud Lucas [00:29:04]:

So finally, I took some scissors and went at it. And I feel like it's the same thing with Gen AI. It's easy to use until it's not. And the issue is to figure out that difficult 15 percent and make sure it's not going to bite you back every day. So I agree that, ultimately, the cat is out of the bag. Contrary to other technologies before, there is a place for it, and I believe that place will be wide, meaning it's going to be across different industries doing different things all over the place. The problem at this point is being careful about where. We're still trying to figure out where it can properly be used and how.

Arnaud Lucas [00:30:07]:

I think that's why you have so many startups doing generative AI. It's because people have to try to figure out the best use for those tools. They'll be claiming, oh, this is going to be fantastic for this or for that, but people are still trying it out. So I agree with you, but I believe as well that you have to be careful. The first to win the prize may honestly not be the best, because of the liability and the risk that you are taking. So be careful about

Adam Gray [00:30:48]:

that. So here's something. So leadership needs to be aware of AI and its use, and needs to maybe champion AI within organizations. But I can see a potential problem here, and there are two strands to it. First is this idea of AI getting things wrong. You, Arnaud, spoke about hallucinations. Bertrand, you said you asked the same question twice and got two different answers. So that's potentially quite a big issue.

Adam Gray [00:31:31]:

And when I worked in a large organization, one of the things that amazed me was that the organization hired really good people, and then it treated those really good people like they were stupid. It said, we're hiring you to do this job, but we will write a process or a set of KPIs that behaves as if you know nothing about what it is you're doing. And the challenge here, certainly in large organizations, is that there's a real possibility that the leadership may not trust the people within the organization. And as a result, they will always believe that the AI, or the electronic or digital solution, is the right answer, and that your answer as a mere human is the wrong answer. So how can leadership say, okay, here's how we want to use AI, but we need to, as you said, put humans above that?

Arnaud Lucas [00:32:36]:

Mhmm. I mean, it comes down to making sure that people are first in any organization, and they should be. To your point, if you are hiring the right people for the organization, then people are first within the organization. They are the ones that ultimately will deliver the impact that you want from the organization. Now, for me, that goes back to the notion of extreme ownership. As a leader, you should have the mindset, kind of the attitude, of leadership being the single critical factor in whether an organization fails or succeeds. And what that means is, if mistakes are being made, if things change, sometimes through nothing that you did, you have to take responsibility. You have to learn from your mistakes and take action.

Arnaud Lucas [00:33:35]:

You want to do that, but you also want to bring that mindset not just to yourself; you want to bring it to everybody else on the team. Everybody should be an owner. Everybody should be a leader. Everybody should be owning and driving and leading something to have the best impact. I think that's what it comes down to: as a leader, if something happens, you take responsibility, and if there's success, it's your team's success, because they actually brought that success. And it's about delegating ownership at each and every level so that everybody owns their piece. Therefore, there need to be some guidelines about how to use AI, for example, within that context.

Arnaud Lucas [00:34:29]:

But each and every employee should be empowered to use the tool as best as possible for their needs. As long as they understand the guidelines, they should be able to experiment, they should be able to take risks, and so on. So for me, that's it: how to empower everyone within a company to effectively experiment with the technology and see how best to use it.

Rob Durant [00:35:02]:

As a leader, what might those guardrails look like? What are you instituting, and how do those align with the broader business goals?

Arnaud Lucas [00:35:13]:

So for small companies, usually there are not that many guardrails. The biggest category comes in when we really want to push something to production and expose AI output directly to our customers. That's where the guardrails kick in, because whatever we do internally is fine, we don't have to worry about it as much, but once it touches customers, that's where we have to be careful. In bigger companies, there are worries about data protection, which I think is a real worry with AI that you have to be careful about. You can't just feed AI any data. Proprietary data, business data, PII, personal data should be a big no-no, obviously. You don't feed the list of customers you want to reach out to into ChatGPT.

Arnaud Lucas [00:36:11]:

That would be a very bad idea. So that's what some of the guardrails are about: yes, it's fine to use AI, but do it in a way that doesn't expose what would be considered critical data from the organization. For me, that's the main guardrail. And then from a customer standpoint, it's more the notion of hallucinations, how to mitigate them, how to verify the output, so that you don't damage your brand because your Gen AI is doing something crazy, especially on the customer service side. If someone tricks the bot and your model starts refunding fraudsters, for example, starts handing out refunds it shouldn't,

Arnaud Lucas [00:36:56]:

that's not a good thing. So there are definitely guidelines up in there, but even internally, I think there are some guidelines. It doesn't have to be a lot, I feel, but there should be some policy of some kind on how best to use AI in a way that doesn't compromise your corporate data.
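Arnaud's tricked-refund-bot scenario hints at what such a customer-facing guardrail can look like in practice. Below is a minimal sketch, not anything Wanderu actually runs: the function names, statuses, and policy values are all hypothetical. The idea is that the model may only *propose* a refund, and deterministic code validates the proposal against written policy before anything executes.

```python
# Hypothetical guardrail: a Gen AI agent may PROPOSE a refund, but plain
# code checks the proposal against written policy before it is executed.

MAX_REFUND = 100.00          # assumed policy cap per booking
REFUNDABLE_STATUSES = {"cancelled_by_carrier", "duplicate_charge"}

def approve_refund(proposal: dict) -> tuple[bool, str]:
    """Validate a model-proposed refund against deterministic policy rules."""
    amount = proposal.get("amount", 0)
    status = proposal.get("booking_status", "")
    if status not in REFUNDABLE_STATUSES:
        return False, f"status '{status}' is not refundable; escalate to a human"
    if amount > MAX_REFUND:
        return False, f"amount {amount:.2f} exceeds policy cap; escalate to a human"
    return True, "auto-approved within policy"

# A hallucinated or socially-engineered proposal is caught before money moves:
ok, reason = approve_refund({"amount": 950.0, "booking_status": "changed_my_mind"})
```

The point of the sketch is the division of labor: the probabilistic model drafts, the boring deterministic check decides.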

Rob Durant [00:37:16]:

So how do you find that balance? How do you measure it? Obviously, PII is too far in any instance, but how do you know when maybe you're giving some information regarding plans within the organization in hopes of getting an output, and you've pushed it too far? How does an employee know? How does a leader let the employee know what is too far?

Arnaud Lucas [00:37:50]:

The line will be fuzzy anyway, and I think we already cross that line in a lot of sectors. For example, in software, a lot of engineers on GitHub use GitHub Copilot. The catch with GitHub Copilot, unless you check that checkbox, is that it's going to use your code as training data, effectively. And you could argue that when you're a tech company, software is your lifeblood. That's everything you are about, sometimes your secret sauce. So if your source code gets leaked in some way, then you're already giving the keys of the house to somebody else. And companies are doing that already. So the critical data for me, at least, and every company will have a different understanding there, would be any kind of numbers, PII, any kind of customer information. Because from a risk standpoint, you have to comply with GDPR, or a privacy framework, or the new DFS 500 cyber rules, or anything of that kind.

Arnaud Lucas [00:39:07]:

So there is a need to understand where your data is, what your critical data is, where it's stored, and how you retain that data. And once you start feeding that data into ChatGPT, you lose control, because you don't know where it's going, effectively. So I think that's it: any data that requires compliance, in my opinion, that's where the line is at this point. At least that's what I've seen.
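One concrete way to draw the line Arnaud describes is to scrub obvious identifiers before a prompt ever leaves the building. Here's a rough sketch; the regex patterns are purely illustrative and would miss plenty in production, where you'd reach for a proper DLP tool rather than hand-rolled expressions.

```python
import re

# Illustrative patterns only; real data-loss prevention needs far more than regexes.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "card":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scrub(prompt: str) -> str:
    """Replace likely PII with placeholders before sending text to an LLM."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()}]", prompt)
    return prompt

clean = scrub("Follow up with jane.doe@example.com on +1 617 555 0100")
```

Even a crude filter like this makes the fuzzy line a little more enforceable: the placeholder tells the model *that* a customer exists without telling it *who*.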

Tracy Borreson [00:39:42]:

So I have a question based on what Rob is sharing in the chat about the new EU AI Act. Mhmm. We're kind of talking here about how an individual organization and its leadership can decide what the box is, the box that makes sense for us. Arnaud, as I mentioned earlier on the philosophy point, there are a lot of people talking about ethics and AI, and governments are creating policy around these things. The policy might be supportive of your business, and I'm not recommending people work outside of the policy, but there's also a different level of box that probably needs to be observed based on your individual business goals and industry and those types of things. So what do you see as the relationship between those two activities, the internal boxing, shall we call it, and the governing bodies?

Arnaud Lucas [00:40:48]:

So I'm glad that governments are coming up with regulations. At the same time, because it's still famously the Wild West, it's hard to see how those regulations will actually help or not help. I think people are trying to do their best. They are trying to promote innovation in a way that doesn't make it dangerous for society. That's mostly what the regulations are about: making sure that AI doesn't hurt people. That's their main priority.

Arnaud Lucas [00:41:35]:

And that's different from what companies should care about. Some of it is related, like the external, customer-facing aspect of how you use AI in your products. That's what companies are worried about, and because that has a direct implication for customers, it actually might align well with the government compliance aspect, which is: we don't want to hurt people while we're selling solutions to people. So, therefore, we need to be careful about using AI in our products. On the internal aspect, it's more compliance and regulatory risk around your corporation's data, and also not feeding people's data into the model. So I think that's the difference. But ultimately, the restrictions that I've seen from the EU AI regulation have been more about protecting people's data and making sure that there are use cases where AI is not appropriate because it undermines society in some ways.

Arnaud Lucas [00:43:01]:

In a way that potentially could be detrimental. It has already happened before, this notion of fake news or other kinds of mistrust on social media and all those things. AI has the potential to amplify that. Well, it's not just the potential; it's already doing it, inflicting even more of that. So the goal here is to prevent some of these egregious use cases. And that's going to keep evolving as we learn more about the tool. Did I answer your question?

Tracy Borreson [00:43:38]:

Yeah, that's awesome. And it makes me ask another question, because this is how I think. As leaders, we're looking at how AI can be used throughout the organization. How much of this is education from a leadership perspective? Because, honestly, it was news to me at the beginning of this conversation that ChatGPT can't do math. I think there are a lot of misconceptions about what it can do. And I also think that a lot of how an organization uses something is based on the example set by the leadership. So how much of this is just about education? And then, over and above what you do, of course, do you have some good recommendations for educational materials, specifically for leaders in this arena?

Arnaud Lucas [00:44:37]:

So the first part, for me, is more than education. It's almost like a disclaimer. Every time that AI is used, specifically Gen AI, and I would argue machine learning is not in that category, Gen AI is kind of another layer on top of machine learning, which companies are already using in their products today. I think there should be a disclaimer about the fact that it's being used. When you post content, but also when you interact with something, it should be very clear that a Gen AI is doing that thing for you. Because if you as a user know that, then you can actually... well, there are two aspects.

Arnaud Lucas [00:45:28]:

One is: oh, this is a Gen AI, people may try to trick it. That would be my first instinct, not because I'm a hacker at heart; it's more, oh, a tool, let me try to break it. But the other aspect is that I understand the limitations. Then I can work with that and say, oh, you're telling me something? Let me double-check.

Arnaud Lucas [00:45:48]:

Or, what do you mean you can give me a refund? Let me double-check your policy to make sure you're telling me the truth. So that's one aspect. That notion of a disclaimer, I think, is important, and that's part of the regulations, to add more of those disclaimers, the "this may hurt you if you are not careful" kind of thing. But the other aspect is education. There are plenty of trainings and articles educating you about AI. Even on LinkedIn, there are a couple of courses that are actually very good, that go step by step through all the kinds of hallucinations and how to mitigate them, that really go deep into understanding what Gen AI can and cannot do, so that you get that sense.

Arnaud Lucas [00:46:49]:

The point is that Gen AI has the potential to touch the life of everybody. Not just practitioners like me, and I use Gen AI every day, but even people that don't use it are still exposed to Gen AI. It feels like everybody at some point will get exposed to it. And to your point, they need to be educated about what Gen AI is and what it can do. I'm not sure how to fix that, because ultimately that's education at scale right there.

Arnaud Lucas [00:47:29]:

People have limited time. So is it some kind of social campaign, so that it shows up in your personal feed?

Tracy Borreson [00:47:39]:

It reminds me of a commercial my sister was in when we were kids, called Not All Bugs Need Drugs. It was about how viruses don't need antibiotics and things like that. Almost like a public service campaign.

Arnaud Lucas [00:47:51]:

Exactly. And for me, that's needed, at least in the current state of the technology. Maybe at some point it's going to get so much better that you won't need it. But at this stage, I would be very worried about people getting exposed to Gen AI without having any understanding of it. And I'm worried because we're there. I mentioned the anecdote of my daughter's friend in 7th grade using it. It's like...

Tracy Borreson [00:48:19]:

So I can't help thinking of it. I mentioned it at the school playground the other day. A mom was trying to figure out how to turn on some parental controls on something, and I was like, oh, have you asked ChatGPT? And she was like, what? What is that? I always feel like I'm so far behind on AI, but I'm not, in the global population.

Arnaud Lucas [00:48:40]:

Well, at some point, everybody is going to get exposed to it even if they don't know what it is.

Tracy Borreson [00:48:44]:

Well, now I'm introducing it to people. So there you go.

Arnaud Lucas [00:48:47]:

Exactly. You're part of the problem. So, I mean, that's the point: how do you educate at that scale, more than anything? And I don't have a good answer for you. That's not my area of expertise.

Tim Hughes [00:49:01]:

So, Tracy, you're the ChatGPT dealer hanging out?

Tracy Borreson [00:49:05]:

Yeah, I'm the Strathmore ChatGPT dealer.

Bertrand Godillot [00:49:10]:

So, typically check-in the installations.

Rob Durant [00:49:15]:

Yeah. The first story is great.

Arnaud Lucas [00:49:17]:

Use it for good.

Rob Durant [00:49:20]:

I know we've But

Tracy Borreson [00:49:21]:

I don't even know all of the things that it could do. And this is the thing that's interesting, and why I love conversations like this: it brings to light things you don't know you don't know. And then you can dive deeper into those things to try and learn more. But if you think you know, then you just go and use a tool. And there's a lot of marketers using it to create content and things like that, and they don't have a good understanding of what it can do, what it's good at, when it's hallucinating, how to pressure-test things. And I think it's important to help people understand what they don't know they don't know.

Rob Durant [00:50:00]:

Arnaud, we've talked about leaders installing guardrails. What are some of the biggest barriers to implementing a successful AI strategy, as opposed to simply locking it all down and pretending that it's not accessible?

Arnaud Lucas [00:50:20]:

So in my opinion, in order to use it successfully in any organization, you can't just use it vanilla. You can't just open the ChatGPT app and start putting prompts in there. Usually it's more involved than that, because you need to be able to use some of your proprietary data to match your use cases, which potentially means having your own model, trained a certain way. You need to have workflows in place. You need a way to run it in a way that's cost effective, which is another aspect. We use Snowflake, for example, and you can run a Gen AI algorithm in your SQL query, just like that. You say, here's the SQL, and Gen AI, I choose this algorithm to run on my query and give me the insights.

Arnaud Lucas [00:51:25]:

The catch with that is: yes, please use Gen AI, but you're going to be charged by compute time, and Gen AI is way more expensive than just running a standard SQL query, let's put it this way. So there is a cost aspect. There are all these kinds of side considerations, but also the fact that, at the end of the day, you really have to understand which critical use cases you believe Gen AI can help with and start prototyping on them, so you can play with it.
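That cost aspect can itself become a guardrail in code. A toy sketch follows; the per-token rate, the per-call budget, and the characters-per-token heuristic are all made-up placeholders rather than any vendor's real numbers. The shape of the check, estimate first and refuse over-budget calls, is the part that carries over.

```python
# Toy cost guardrail: estimate a prompt's cost before calling a paid LLM API.
# The rate, budget, and 4-chars-per-token heuristic are rough assumptions.

PRICE_PER_1K_TOKENS = 0.03   # placeholder rate, not a real vendor price
BUDGET_PER_CALL = 0.50       # placeholder per-call spending cap

def estimate_tokens(text: str) -> int:
    """Very rough heuristic: roughly 4 characters per token for English text."""
    return max(1, len(text) // 4)

def within_budget(prompt: str) -> bool:
    """Gate the call: True only if the estimated cost fits the per-call budget."""
    cost = estimate_tokens(prompt) * PRICE_PER_1K_TOKENS / 1000
    return cost <= BUDGET_PER_CALL

ok = within_budget("Summarise last week's bookings by route.")
```

In practice you would meter actual usage too, but even a pre-flight estimate like this stops the "Gen AI on every SQL query" bill before it arrives.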

Arnaud Lucas [00:52:00]:

Start with a small use case, or with a part of the use case. Really iterate on it, and then just keep on expanding. It's like any technology. You don't just say, oh, I have this big thing, it's going to take six months to get there, I know exactly what I need to do, and I'll see you in six months. That's never what you should be doing. It's like, great:

Arnaud Lucas [00:52:26]:

here's the six-month thing that I want to have; what is the strategy that I want to implement? What's the first step? Let's learn from the first step. What's the second step now? And make your way. Typically, you're going to end up in a different place than what you originally strategized for, but that's actually better. And Gen AI is the same way: as you start implementing it, you learn about its limitations. You learn about the need for guidelines, because you start experimenting, internally or in ways that are focused.

Arnaud Lucas [00:53:06]:

And then you are going to say, oh, this user did something wrong with it; I should make sure we don't do that again. And so you learn, and then you keep on iterating and expanding. And that's part of making your way toward the center of the tech radar that I mentioned as well.

Rob Durant [00:53:24]:

So we, too, have ended up in a different place. This has been great. Where can people learn more? How can they get in touch with you?

Arnaud Lucas [00:53:36]:

So the best platform I've been using for myself has been LinkedIn, so my LinkedIn profile is by far the best place. I'm actually planning to start publishing some articles about hallucinations next week. It's a very timely topic for me because that's what I've been focusing on, I think, the last couple of weeks: that topic exactly, and the dangers of Gen AI. People talk a lot about the promises, which are there, and Gen AI is here to stay. Now, I believe there's a bit of a bubble right now, but at some point we will see what the real use cases are. I'm more worried about what Tracy was mentioning, that people don't understand some of the limitations and how to deal with them. So that's something I want to write about.

Arnaud Lucas [00:54:26]:

Excellent.

Rob Durant [00:54:28]:

We now have a newsletter. Don't miss an episode. Get show highlights, beyond-the-show insights, and reminders of upcoming episodes. You can scan the QR code on screen or visit us at digitaldownload.live/newsletter. On behalf of the panelists, our guest, and our audience, thank you all for making this another highly interactive and entertaining episode of The Digital Download, and we will see you next time.

Arnaud Lucas [00:55:00]:

Bye, all. Bye bye.

#Leadership #Innovation #AI #SocialSelling #DigitalSelling #SocialEnablement #LinkedInLive #Podcast


DigitalDownload.live

The Digital Download is the longest running weekly business talk show on LinkedIn Live. We broadcast weekly on Fridays at 14:00 GMT/ 09:00 EST. Join us each week as we discuss the topics of the day related to digital transformation, change management, and general business items of interest. We strive to make The Digital Download an interactive experience. Audience participation is highly encouraged!
