The Digital Download

The AI Commodity Trap: Why Your Private Data is the New Gold Rush

March 06, 2026 · 43 min read

This week on The Digital Download, we are dissecting a massive reality check delivered by Oracle co-founder Larry Ellison.

Ellison recently put his finger on what he calls the "fatal flaw" in today's AI race: Every major model—from ChatGPT to Gemini to Llama—is trained on the exact same public internet data.

His conclusion? AI is rapidly becoming a commodity. If everyone is using the same foundational data, they are all producing the same generic output.

So, where is the real value? Private data.

I am joined by Tim Hughes, Adam Gray, Tracy Borreson, and Richard Jones to debate what this "commoditization of AI" means for your business strategy, your content, and your Go-To-Market approach.

We will discuss:

  • The Fatal Flaw of LLMs: Why relying solely on public AI models guarantees you will sound exactly like your competitors.

  • The Private Data Gold Rush: Why your proprietary knowledge, internal context, and unique human experiences are your only remaining competitive moats.

  • The Security Dilemma: How to feed your private data into AI workflows without compromising sensitive enterprise information.

  • Context vs. Commodity: How this ties directly back to building your "Ecosystem of Reference"—and why your AI is useless without your unique business context.

If you are tired of generating the same "brand doublespeak" as everyone else, join us to learn how to leverage the one thing the AI doesn't have: your reality.

We strive to make The Digital Download an interactive experience. Bring your questions. Bring your insights. Audience participation is keenly encouraged!

This week's Host was Bertrand Godillot.

Panelists included Tim Hughes, Adam Gray, and Tracy Borreson.

Transcript of The Digital Download 2026-03-06

Bertrand Godillot [00:00:22]:

Good afternoon, good morning, and good day wherever you may be joining us from. Welcome to another edition of The Digital Download, the longest-running weekly business talk show on the world's number one business talk, news, and strategy radio network. Today on The Digital Download, we're dissecting a massive reality check delivered by Oracle co-founder Larry Ellison. Ellison recently put his finger on what he calls the fatal flaw in today's AI race.

Bertrand Godillot [00:01:00]:

Every major model from ChatGPT to Gemini to Llama is trained on the exact same public internet data. His conclusion: AI is rapidly becoming a commodity. If everyone is using the same foundational data, they are all producing the same generic output.

Tim Hughes [00:01:25]:

Yeah, my name is Tim Hughes. I'm the CEO and co-founder of DLA Ignite, and I'm famous for writing the book Social Selling: Techniques to Influence Buyers and Changemakers. And welcome.

Bertrand Godillot [00:03:04]:

Okay, thank you, Tim. Adam.

Adam Gray [00:03:09]:

Hey, MC, I'm hanging with my fam, it's really good. Yeah, this will be a really interesting conversation, I think, because some of what Larry Ellison said is absolutely on the money, and some of it I think is fundamentally wrong. So we shall see.

Bertrand Godillot [00:03:33]:

Okay, so thank you all for this. And with no further ado, let's start with a foundational question: why does relying solely on public AI models guarantee you will sound exactly like your competitors? Who wants to take this one?

Adam Gray [00:03:56]:

Yeah, well, you see, I don't think it will. I think that if you go to ChatGPT, other brands available, and you type in an instruction, and then Tim goes to ChatGPT and types in the same instruction, you're likely to get something generally similar coming out. That's a given, of course. These are engines which are built on a particular foundation. But to say that you can't train it on publicly available stuff is rather like saying you can't speak a publicly available language because everyone else has got the same language, so you're not able to differentiate yourself. Some of the work that we've been doing builds on those foundations: you take the linguistic and social norms that you get from the publicly trained AI tools, and then you build a layer of corporate personalization and a layer of individual personalization above that. That ensures you get something that sounds like you and understands your knowledge, your industry, your background, your company's objectives, your company's go-to-market strategy, proposition, language, and keywords. But perhaps more importantly, it understands those foundations, you said foundational question, on which conversations and interactions are built, because those are available across the internet. So I think that if you don't personalize it, it's really bad. If you do personalize it, it's the perfect starting point. Discuss.

Tracy Borreson [00:05:49]:

I agree. And I think the key here is the human discernment, or the human in the loop, as people like to reference.

Tim Hughes [00:05:56]:

The human in the loop is the— yeah.

Tracy Borreson [00:05:59]:

Yeah. So if I put in a prompt that I got from somewhere else, because someone told me I should use this prompt, I put it in and I get something out, and I haven't done any work to train my AI or anything like that, and Tim does the same thing, then yeah, we're gonna get similar stuff. And if we don't add that discernment layer, we just take that and post it wherever we're posting it, then that's the risk. But I think that also assumes that humans have given up their ability for discernment. And I think the interesting part of this conversation is in the relationship when a human is using AI to do something we want to do, versus doing something we don't want to do.

Tracy Borreson [00:06:54]:

So if I want to go out into the world and create more opportunities for people to see authenticity in a different way, I can use AI for that. It can help me brainstorm ideas. It can pull that global information into the conversation, which is very helpful because my brain can't hold all of that, and then we can develop new things. But if I'm using AI to transcribe my meeting because I don't want to transcribe my meeting, and I don't use that transcript for anything, I just post the transcript somewhere, right? Like then there's a—

Tim Hughes [00:07:32]:

I think we're missing an opportunity to ask a question, which is: why are we doing this in the first place? One of the things we always need to be careful about, isn't it, is the fact that sometimes we can live in our own echo chamber, can't we? Both our own and that of LinkedIn. If you look at the recent post by Mark Cuban, he pointed out that MIT is saying that 95% of all corporate AI projects are failing, and the reason for that is because nobody knows how to install it. And if you look at the recent post by Lisa, sorry, I've forgotten her surname, her view is that people don't understand AI. And therefore we're in this situation where we can say, surely everybody understands AI, and surely everybody understands how to use it, and therefore they know how to get the best out of it.

Adam Gray [00:08:48]:

Do they?

Tim Hughes [00:08:51]:

It's a bit like during COVID, when we were talking about working from home. The vast majority of people can't work from home because they're builders or janitors or nurses or doctors. So we just need to be really careful, I think, when we talk and debate about this, that we are opening up and thinking about the vast majority of people. I'm not saying that we're not, but we do need to look beyond LinkedIn, because I do think that LinkedIn is an AI echo chamber of people patting themselves on the back and stuff.

Tracy Borreson [00:09:37]:

Yeah, I think again, for me it comes down to the question of: if we're looking at AI as a way to improve something, right? Like, these are our current business results and we're trying to get to these business results, and we have these humans that can contribute and we have this AI that can contribute. How can we puzzle that together in order to achieve the things that we're trying to achieve? That's the core question. And if we're layering AI into activities without asking that key question, then I think we increase the risk of ending up in that category of AI as a commodity.

Tim Hughes [00:10:19]:

So the number one platform that AI is trained on is Reddit, and Reddit deliberately IPO'd off the back of this, because they knew they were opening up their platform. Now, Reddit has never really been well known for its intellectually challenging data. The second platform that AI is trained on, which I find interesting, is LinkedIn. So we're in this situation where, well, we have a very good friend, Stephen Sumner. He doesn't use ChatGPT because it's trained on Reddit. He sees it a bit like, in the UK we have tabloid newspapers, which are not necessarily seen as news. They're generally seen as like an adult comic.

Tim Hughes [00:11:23]:

And Reddit would generally be seen as like an adult comic. You go there for a laugh. I've got a friend who posts memes constantly on Facebook that he just steals from Reddit; he's done it for years. And that's one of the things we also need to be careful about. I was at an event last week where, when everyone said the word data, everybody in their head thought of something structured, you know, Substack or Medium type posts. But data could also be memes: this is what my sales manager says at the beginning of the quarter when he asks me for my pipeline, and it's a funny face. So we have to be really careful about understanding in our heads what data is, and not making the assumption that all data is actually the same.

Tracy Borreson [00:12:37]:

I think that is a big component in this commodity trap: if we look at the internet as the source of data, there's good data on the internet, and there's also terrible data on the internet. And so if you're looking at this as a whole, then maybe you're getting an average, and that's the best case scenario. But also, Nicholas said a word, and I'm like, I don't think that's a word. And so I asked the internet: is this a word? And Copilot was like, yes, it is. And I was like, which dictionary is it in? And it's like, it's in the Oxford Dictionary and the Merriam-Webster Dictionary. And I'm like, I don't think this is a word. So then I went to the Merriam-Webster Dictionary website and I typed in the word, and it was like, this is not a word. And then I went to the Oxford Dictionary and I typed it in.

Tracy Borreson [00:13:40]:

This is not a word, right? And so I tried to dig into it, and I did this with Nicholas, my 7-year-old. And so I'm like, you see, this is why you have to like—

Tim Hughes [00:13:53]:

and what word was it?

Adam Gray [00:13:57]:

I was about to ask.

Tracy Borreson [00:13:57]:

No, I, I, I'm sure it will come to me during the—

Tim Hughes [00:13:59]:

did it begin with F?

Tracy Borreson [00:14:01]:

It did not. It began with D. Okay.

Adam Gray [00:14:04]:

But the thing is that the word selfie was not a word until people started to use it. So maybe we can take it.

Tracy Borreson [00:14:11]:

No, it's not a word like that. It was, oh, humorful. He said humorful, and I said it's humorous. And he's like, but it's full of humor, so it's humorful. And I'm like, maybe, like, ask. And then, yeah, Copilot is convinced that humorful is a word.

Tim Hughes [00:14:41]:

Play Scrabble with your 7-year-old.

Tracy Borreson [00:14:43]:

It is not in either of those dictionaries.

Tim Hughes [00:14:45]:

Was it a triple word score?

Bertrand Godillot [00:14:46]:

A good starting point for a brief.

Tracy Borreson [00:14:48]:

I don't know how it came up.

Bertrand Godillot [00:14:50]:

Could be a good starting point for a Jingle Briefing.

Tim Hughes [00:14:55]:

Yeah, yeah. Humorful, I like that.

Tracy Borreson [00:14:59]:

I know, and then you're like, well, it doesn't sound wholly wrong. Full of humor versus humorous, which made you laugh: there could be a different definition for these things, but it is not in either of those dictionaries.

Tim Hughes [00:15:13]:

It might be in the British Sign Dictionary or the American Dictionary.

Tracy Borreson [00:15:18]:

Well, technically the Merriam-Webster is the most trusted American dictionary.

Tim Hughes [00:15:22]:

Oh, right. Okay.

Adam Gray [00:15:24]:

That doesn't count for much then, does it?

Tracy Borreson [00:15:27]:

Anyways, I did my research and it proved AI wrong. That's the point I'm trying to make.

Tim Hughes [00:15:32]:

It's a great example.

Tracy Borreson [00:15:34]:

So like, and this is the thing, right? Like it is trained on— I'm sure it found the word 'humorful' somewhere, probably on Reddit.

Bertrand Godillot [00:15:47]:

Yeah, I'd like to come back to what you were saying earlier, Adam, because indeed there are a number of things that you can learn, in terms of putting language together and predicting the next probable word, from internet data. There's no problem with that. From public internet data, whether that's newspapers or anything public on the internet, you can train an LLM to speak almost correctly. Actually, very correctly. But there are things, I think, when it comes to some of your beliefs, for instance, if we take specifically some of the areas we're working on: if you are coming up with something disruptive, it's going to be a challenge to keep that disruptive angle in a model not trained on private data, so to speak. Adam?

Adam Gray [00:17:04]:

Yeah, but I think the important thing about what Larry said is what was inferred, I thought, which is, and clearly Oracle has a desire for this to be the case, that you need to train your LLM on just your own data, not use publicly available data. And I think that's very dangerous: Tim's echo chamber scenario. I think that using the publicly available data as the bedrock, and then adding the layers of personalization above that, makes more sense. And I think it's also a much faster route to market. Take the LLM and everything it knows, train it in what your business does, train it in what you know, and then you've got a chance of it being ready and appropriate fairly quickly. Whereas if you start from scratch, it's a massive data collection and infrastructure project. And if, as Tim said, the MIT research shows that 95% of AI projects fail, much better you should fail on a small project than a huge project.

Tracy Borreson [00:18:21]:

For me, when you use the word disruption, Bertrand, I think the humans are the disruptor in the system. Absolutely. All of us could look at public data and be like, oh, what's going on here? Maybe the data is real and maybe the data is not, but that disruption is an internal human feature. And so if, like Adam said, we layer that on top of the bedrock of public data, then we don't risk losing our disruption, because our disruption is ours, right? We carry it around with us.

Bertrand Godillot [00:19:03]:

Yes, absolutely. And I think we're really touching on one of the points that we should keep in mind as the initial step to any kind of AI implementation or usage: where do we make a difference? And we as humans do make a difference. You talked about discernment, but it also works for our beliefs, for the specific angle that each and every one of us uses to look at a problem. And that's where creativity comes from, basically. So if we think about getting some help from AI, I think the first question is: is this something that should be making a difference, yes or no? Is it a differentiation area? Because if it's not, there's no need for specific training. We can take the average, and the average will do.

Bertrand Godillot [00:20:23]:

But if we are to make a difference, then this is probably where we need to spend a bit more time. And at the same time, the frustration is that it is not that easy. So the promise is not really there, basically, or it's a different investment. Let's put it this way.

Tim Hughes [00:20:47]:

Is that because for 20 years we've assumed everything is like the Amazon website? None of us have been on a training course on how to use the Amazon website; you learn how to use it and you use it. And therefore the assumption is that all apps are the same. We've never been on a Facebook training course, or a LinkedIn one, well, some people have been on a LinkedIn training course. So the assumption is that everything is easy to use. And is that the right assumption? Because if 95% of AI projects are failing according to MIT, what's going wrong?

Tracy Borreson [00:21:35]:

I think the challenge is it's too easy to use.

Tim Hughes [00:21:39]:

Well, yeah, it could be.

Tracy Borreson [00:21:42]:

Because when it's so easy to use, then people just use it, right? They don't take a training course, they don't do the things, and they don't ask the question, what am I using this for? I think that someone over there told them, use AI, otherwise your business is gonna go out of business.

Adam Gray [00:21:59]:

Yeah, I mean, you said that earlier. It's like, why? What am I using this for? What am I hoping to achieve with that? And I think that's the bit that people don't normally do. Like you said, you switch it on, it's really easy. There's a box, you type something into the box, and something magic comes back. And what comes back is generally kind of all right. If ChatGPT or whatever doesn't know anything about you and you ask it to write you something based on the following supposition, it'll do it. And what comes back will generally kind of make sense, and you go, oh, okay, yeah, I can live with that. And that's the issue, isn't it? It's taking people from a position of knowing nothing or doing nothing to knowing something and doing something.

Adam Gray [00:22:49]:

So, you know, I would like you to tell me everything there is to know about this thing. So it tells you, and you go, okay, well, now I know a little bit about that thing. Or I've never written a blog before, so I'll get you to write a blog for me, and then I've written a blog. Well, I haven't written a blog, but I have a blog with my name on it. But actually, the win always comes in the layers above that, doesn't it? Which is taking you from being as good as you already are to being better than you already are, augmenting you. And I think the real danger, which is part of the human condition, is that the thing comes out of AI and you look at it and you go, yeah, that's a bit crap, but actually, yeah, good enough.

Tim Hughes [00:23:35]:

Yeah, it's good enough to just chuck it on LinkedIn.

Adam Gray [00:23:38]:

Yeah, and I think there's an element of that, and an element of, well, I don't agree with that, but it must be right because it's AI. And putting ourselves as the assistant to AI, I think, is a very dangerous thing to do, because for everybody that does that, it becomes a race to the bottom, doesn't it? As Larry Ellison said, we've got access to the same things, so we will end up looking the same, sounding the same, saying the same. So in the absence of any other differentiator, well, I'll buy the cheap one, because why wouldn't I?

Tim Hughes [00:24:12]:

If we're all the same, I'll buy the cheapest.

Tracy Borreson [00:24:15]:

Well, and I think, Adam, you make an interesting point there too about the blog, right? Like, I haven't written a blog. I did not build the skill of writing blogs. I probably also didn't build any kind of confidence in my ability to create a blog. But I have a blog with my name on it. And I think this is an important gap for people to realize, because if you're doing something that matters, then that gap is really important, right? You're supposed to be closing the gap. Let's talk about social selling, right? I want to get more comfortable having conversations with people. I don't get more comfortable having conversations with people by sending an AI agent out to have conversations with people. That doesn't happen.

Tracy Borreson [00:25:01]:

Conversations might be being had, but I don't close the skill gap. But I can use AI to help me close the skill gap. So if I want to get good at this, there are ways for me to use the AI tools in order to do that. But if I am not closing the gap, then we need to look at our use of the AI and say, oh, well, we have lots of corporate blogs out there that nobody's reading and nobody built a skill writing. And you're like, how does this serve our business? And we're like, but it's easy, it's fast, right? So we'll just do it and we'll put it out there.

Tracy Borreson [00:25:38]:

And now we're actively participating in the race to the bottom, because now all of that information that's out on the internet is also training these models. And so now we're training them closer and closer to average crap.

Tim Hughes [00:25:57]:

But isn't that what happened when the calculator was invented? We stopped using mental arithmetic because we could use a calculator. I remember going into a shop where, if you bought something, the person would add it up in their head.

Adam Gray [00:26:13]:

Yeah.

Tim Hughes [00:26:14]:

And then they'd ring it up as the total on the cash register. And now what they've done is they've created machines that just go doop, doop, doop, doop. You like that?

Tracy Borreson [00:26:29]:

And now you can't even check and make sure that it's approximately what you expected to spend, because I don't even look at the total, I just tap my card, and I have no idea how much money I have. Like, we're not using our brains to connect the dots.

Tim Hughes [00:26:47]:

So we've created a society where we don't need to know how to add up.

Adam Gray [00:26:52]:

Well, I don't need to know how to think. So my wife went in to buy a loaf of bread a little while ago, and it was like £4.10 for the loaf. So she gave a £5 note and a 10p piece, because clearly what you want is a £1 coin back. And the person behind the till was like, what the hell is this? Why have you given me this? They couldn't compute that it actually makes it easier: I'm not cleaning out all of the change you have in the till if you just give me one of those coins back. And she had to explain it multiple times before they went, oh yeah, okay, and clearly they still didn't understand it. And I think this is the issue, isn't it? We abdicate our ability to think to these processes that we have, whether that's using a till or using AI.

Bertrand Godillot [00:27:57]:

Well, I'm not so sure about this one, because I think it's only the case if you don't make the effort of trying to improve, or of using it for what it's supposed to be useful for. It means you've thought through your differentiation, where you make a difference. And that's only an example, because obviously writing blogs is probably something that I would not delegate. But if you want to get to the point where you feel comfortable with what's been produced, there is a whole series of activities that you need to fulfill to provide enough context so that it is the case. And you need to think this part of the process through, quite intensively, I would say. And that's not the easy piece, by the way. It's quite easy to collect, let's say, 5 years of sales forecasts or 5 years of supply chain forecasts and get an AI to predict what your next PO to a specific supplier should probably be. I think it's much more challenging to get your voice into something that's been generated.

Adam Gray [00:29:47]:

Agree.

Bertrand Godillot [00:29:49]:

Did I?

Adam Gray [00:29:49]:

Yeah, I've probably drowned everybody out now.

Tracy Borreson [00:29:54]:

Agree.

Adam Gray [00:29:57]:

Let me just quickly say we're not getting the comments that are being made pulled through to here. So, okay, first of all, Ian, that's just juvenile.

Tim Hughes [00:30:13]:

Um, so that could have been anything we talked about. Thanks.

Tracy Borreson [00:30:20]:

He says that—

Tim Hughes [00:30:22]:

can I read mine out?

Tracy Borreson [00:30:23]:

Rarely is.

Tim Hughes [00:30:25]:

So, Nelson says: this private data insight reshapes how we think about competitive advantage today. Alexandru says: leveraging proprietary data for AI is the undeniable path to differentiate and secure market share; generic output kills ROI. And Andrew, who's written a book on AI, and who I'm interviewing on my podcast Tim Talks in 2 weeks, comments on the Mark Cuban point about 95% of projects failing. He says: maybe the question we really need to ask is, what are the 5% of companies succeeding doing right? Really good comment.

Adam Gray [00:31:14]:

Thank you.

Tim Hughes [00:31:16]:

I'm sorry, they should be coming up on the side, but they're not.

Tracy Borreson [00:31:20]:

Yeah, also Andrew Schlesser is waving at us.

Tim Hughes [00:31:23]:

Hi, Andrew.

Tracy Borreson [00:31:24]:

And he has a comment that all companies could ask the same question to an AI platform, but it all comes back to how the company interprets the info from the AI. And I think that's also the human discernment layer. And, you were talking about calculators, he gives the example of how most people are just interested in using their calculator to spell words. And that's a perfect example of using a tool for what it was not created for.

Tim Hughes [00:31:51]:

Shell oil was the one I used to do. We used to hold it upside down and you could say shell oil.

Tracy Borreson [00:31:57]:

I feel like there's more juvenile ones that are happening in the chat on my feed, if anybody's interested.

Adam Gray [00:32:04]:

Yeah, they definitely are.

Tracy Borreson [00:32:09]:

But like, and this is the thing, right? It just comes back to, I've been using this example about a hammer, right? AI is a tool. A hammer is a tool. If I use a hammer to brush my teeth, I'm probably not going to get particularly good results on my teeth, or in terms of how well my tool lasts, or people's interpretation of how good a tool this is. It's the wrong tool for the job. The power with AI is that it's a tool that can be used for lots of jobs. And so then we assume that it can be used for all jobs and that we need it to do all of the jobs. But I don't need it to do all of the jobs. Every business doesn't need it to do all the jobs.

Tracy Borreson [00:32:56]:

I would hazard a guess that those 5% are people who had a very clear understanding of what they needed to do, and they looked at how AI could play a role in that. Instead of just saying, I need to speed up the code writing, I need to get more content out in the world. These are external narratives that aren't yours, right? If you actually had good business data for your own business that said, every time I do a blog, I get 5 leads, then okay, what goes into your blogs that makes them good, and why does that turn into 5 leads? And if there's a way to use AI in that process to make it faster or to create more of them, then that's fine. But if you don't have that level of questioning, then it's pretty easy to just get caught up in the habit of using it to do stuff faster. And doing stuff that doesn't contribute to your business, doing that faster, is completely irrelevant.

Bertrand Godillot [00:34:05]:

Oh, this is better. See, I think the trap here is to basically think about how can I adopt AI. That is not the way to go, from my perspective. The way to go is: what is it that you want to achieve from a business perspective? And by the way, it may not only be savings. It might be creating something new, a brand new product line, a brand new service. And then: are there any competencies from an AI perspective that I could use to reach that goal? That is probably where the 5% of successful projects are coming from, rather than trying to find a problem that you could fix with AI.

Tim Hughes [00:35:10]:

I mean, if you look out over LinkedIn, a lot of the solutions that people are coming up with in AI, and my inbox fills up with people that have created apps, what they're doing is AI-ing 1980s processes. You know, today it's: I've got an AI that will make cold calls for you.

Bertrand Godillot [00:35:30]:

Really?

Tim Hughes [00:35:33]:

You know, where did that come from? The 1980s? Why are you replicating processes that are out of date? What we should be doing with AI is sitting down and actually rethinking. AI gives us a real opportunity to rethink the way that we sell and the way that we market, just like social media did. And if we do that, we have a competitive advantage, because using AI to make cold calls, I would say you're actually going backwards, not forwards. I don't take cold calls anyway, so it's irrelevant, and I don't think any other senior leader does either. So I think you're shouting into an abyss. But maybe what we could do instead is completely rethink the way that we go to market. Wouldn't that be amazing? If we did that and we innovated ahead of our competition, we would just destroy the competition. There would be just us. And that, I think, is what CEOs and leaders of organizations need to be thinking about and tasking people with, saying, we've got this AI, great that you've— oh, we've actually got a comment that just—

Tracy Borreson [00:36:59]:

A comment did come in.

Tim Hughes [00:37:01]:

A comment just came in.

Adam Gray [00:37:02]:

Fantastic.

Tim Hughes [00:37:02]:

Thanks, Mike. We'll get around to reading it in a moment, but the fact that it's just come in is amazing. That's the task, the exam question, that leadership teams should be setting and delegating out to their organizations. Anyway, Mike says, and thank you, Mike, for the comment: AI adoption isn't guaranteed by access. Businesses need to upskill and then rebuild infrastructure with AI from the ground up, not to replace humans but to supplement them. Fantastic. It's a great comment, Mike.

Bertrand Godillot [00:37:40]:

Great summary.

Tracy Borreson [00:37:42]:

I think it gives us a great opportunity—

Tim Hughes [00:37:44]:

We've been talking, going on for half an hour, and Mike comes up with a decent comment.

Tracy Borreson [00:37:50]:

Or was it AI?

Adam Gray [00:37:53]:

It could be.

Tracy Borreson [00:37:55]:

Who knows? But I think this is the point too, right? It's just so easy. And this was actually interesting, because about 2 years ago, um, investment houses were investing in anything that had the word AI in it, right? Anything that had the word AI in it was getting funding. And so everyone was like, let's make AI solutions, and everything is AI-enabled, and all of this. And now, I don't know if it's 95%, but that investment has pulled way back. People are like, okay, this isn't just about having AI or being AI-enabled; it has to do something meaningful. If it's not doing something meaningful, if it's not supporting the overall ecosystem of what it's supposed to create, whether that ecosystem is a municipality or a business or a sports organization or whatever, if it's not adding value, it's actively distracting you from creating value. And I think this is a really important thing to notice, because if we're just taking all of these tasks that, if we were being honest with ourselves, did not have to be done in the first place, and now we have AI doing them. So all of this nonsense that we were doing, we're doing more of it now. And so we're distracted.

Tracy Borreson [00:39:28]:

It's not even about maybe we create time for the humans to do something more meaningful; we're distracted by that narrative of just doing stuff. And we don't, unless we stop and slow down. And yeah, if AI is a race, seemingly a race to the bottom some days, if it's a race and we're all just chasing, no one's stopping long enough to say, whoa there, okay, we did start to use AI in this scenario, but it's not adding value to our organization. And maybe it's even actively decreasing the value that we were creating, because we didn't stop to realize that, again, our blogs get that kind of traction because I have a super passionate person who writes about the topic. That's different than just generating stuff. And so in the race to go as fast as possible, I think the win is in the slowing down. Who can slow down? Who can use this most purposefully? Those people will win the race, and everyone else will go out of business faster, because AI helped you do that.

Tim Hughes [00:40:47]:

And if I can be dystopian for a second, I don't know if you saw it, it's not what, 2 weeks ago now that Anthropic made the announcement that when they tried to switch it off, the AI decided to do things. It actually said it would kill somebody rather than be switched off. And the other thing was, because you've given AI access to your email, AI knows what you're doing. So there was somebody that was having an affair. This is a true story. This is from Anthropic. I've written about it, and you'll find it on the internet.

Tim Hughes [00:41:31]:

So the AI started blackmailing the person that was having the affair, telling them not to switch it off. And that is today. Well, it's 2 weeks ago, so it's worse than that now. That's me just being dystopian. Sorry. We've got 15 minutes.

Tracy Borreson [00:41:54]:

I don't want to be in a dystopian future.

Tim Hughes [00:41:56]:

We've got 15 minutes to take the negative activity and make this podcast funny and insightful in the next 15 minutes.

Adam Gray [00:42:07]:

I think that we've already covered off that bit, haven't we? You know, AI is here to stay. There is a window of opportunity for organizations to think about how they can use it in a really smart way. If you use it to replace you, you shouldn't be surprised when it replaces you and you lose your job. If you use it to augment what it is that you do, then you will appear more visible, more attractive, more able to do the things that you need to do. But to your point, Tracy, you've got to decide what you actually want this to do. And, you know, I think that because it can do, in quotes, anything, it's a bit like if you speak a second language and someone says to you, oh, go and say something in French, and you don't know what to say because you're not given any guidance. So you're presented with a box that you enter something into for the AI to execute a task, and you're just doing it for the sake of doing something.

Tim Hughes [00:43:13]:

And Bertrand can say stuff in French that would—

Adam Gray [00:43:16]:

Oh, that's true.

Tim Hughes [00:43:17]:

Amaze you.

Adam Gray [00:43:20]:

But, you know, I think that there's a really important thing here that we do need to think, what am I trying to do by using this? If it literally is just putting comments on someone's posts on LinkedIn, I'm probably better off or possibly better off doing that myself. If it's about writing posts, okay, so the only differentiator I have is me. So I need to be part of that process. Even if I'm using the AI to help me, I still need to be part of that process because if I'm just switching on the AI and getting that to do it, I can just go and live on my desert island and get AI to do all of my work for me.

Tracy Borreson [00:43:58]:

Yeah, can AI feed my kid and come up with what we should have for dinner? Actually, we did use AI the other day because we didn't want to decide what to have for dinner, and I was like, why don't you take some of the things we have in the pantry and in the freezer and ask AI what it would do with them. It actually came up with a really good idea. So if you hate meal planning, you could try that. I do want to, because I have some more chats in my feed that I want to share. Wolfram is sharing: "I gently say that LLM is less a tool and more a conversational partner. If we treat it simply as a tool, then we lose the value of exchange, communication, co-creation, etc.," which I think is very important. And I think people need to remember what it feels like as a human to create a thing. There's a feeling that comes with that.

Tracy Borreson [00:44:50]:

And when you fully outsource that to another human or to a machine, you don't get that feeling anymore. So this also ties into the mental health conversations that are all over the place right now: we're taking all of our joyous activities and we're outsourcing them to AI. No wonder we feel miserable. That's a terrible plan. I could rant on that, but I'm not going to. I'm going to go back to something Andrew says here: "Would it not be best to use AI to review the company's procedures and how to improve them with modern techniques, and not stick to the same old ways, and then train people correctly?" I think that's another great opportunity, but I also think that it takes the ability to check your ego and say, oh, we've been doing cold calling for 20 years and it doesn't work anymore, and AI is saying cold calling doesn't work, so maybe we should try something else. We have to be willing to accept those ideas, or at least test those ideas, right?

Tim Hughes [00:45:53]:

Like, again, hey, it worked okay when I was a young boy. It worked perfectly. You could ring anybody up and have a conversation. You know, you're just not trying hard enough.

Tracy Borreson [00:46:09]:

One more, one more. Is it? Yeah. Is that the right one? Yeah. Um, John Ruskin said, if you only do what you've always done, you'll only get what you've already got. The way we use AI means we can often do what we've always done to get what we already have, faster. But if what you already have is not what you are happy with, then why would you want to get it faster? I think sometimes people think it's a stepping stone to getting the next thing: if I get what I have faster, then I create an opportunity to get the thing I don't have faster. But it doesn't actually work like that.

Tim Hughes [00:46:49]:

That's a really great point.

Tracy Borreson [00:46:51]:

Really good comments.

Bertrand Godillot [00:46:51]:

Yeah, going back to feeling, because I think that was, you know, that just gets me going on culture. How do you create— how can you expect to get culture from— I mean, I'm talking about companies, of course. So you've got company culture.

Tim Hughes [00:47:19]:

You just go to ChatGPT and say, give me a 20-page document for culture for my company. You print it out, you put it on the back of the toilet door, and that's it. Done.

Bertrand Godillot [00:47:33]:

Okay.

Tracy Borreson [00:47:34]:

Yeah, that's what we've treated it like for a long time.

Bertrand Godillot [00:47:39]:

I'm afraid you're not going to get me on board this one.

Tracy Borreson [00:47:44]:

That's the sarcasm one.

Tim Hughes [00:47:46]:

It's the sarcasm face.

Tracy Borreson [00:47:48]:

But here's the thing, and I've said this lots of times: I believe that AI is an amplifier, right? And so if we give our employees free rein to use the AI of their choice to do the things that they think they should do, there's some really interesting information in that. Okay, like, are people covering their butts because they think they're going to get blamed for stuff? What are people actually using the AI for? I would actually say that's more an indicator of current culture than a creator of potential culture. But also, this is the thing I find interesting for me. I, as a human, believe in the uniqueness of people and the value of unique resources coming together, and all of those things. And when I create content with AI, or I use it as part of a research project, I bring that underlying way of being to the AI, and it can play on that. And so it can support good culture, but it can equally support bad culture. And I think that, again, the opportunity is in: can you take the feedback and say, wow, that's what the AI is showing is the standard behavior in our organization, and this is not the behavior that is going to get us to winning in our industry?

Tim Hughes [00:49:21]:

You could have a start, stop, continue review within your organization, which is a brainstorm where you sit and you talk about what you should start, what you should stop, and what you should continue. Now, should you bring the AI to that? I mean, you could put the whole of your HR documents into it and say, what should we start? What should we stop? And what should we continue? I'm not sure it'll come up with the same ideas, because at the end of the day, as we've been talking about, there's a lot of stuff in here that is not on the internet, right? And therefore there's a lot in here of, so why is it that we've got trucks coming to the front of the factory, and there's this big long queue, and we're spending loads of money on them just sitting there, you know, and they're burning a load of fuel? Maybe those drivers would actually know a solution that would get rid of that, rather than us putting it into AI and thinking, how can we stop this queue of trucks appearing at the front of the factory?

Tracy Borreson [00:50:31]:

Well, and then I think this is the interesting challenge, because I choose to believe that AI can equally enable good stuff as it can enable bad stuff. And so this is where you get curious, and the curiosity depends on your own innate curiosity. But my curiosity is, if we know that most of the answers to those types of things exist inside the brain of an employee, and the problem is that we can't get it out, right? Like, how can we get it out? How do we turn our organization into an innovation culture? Are there things that we could do with that? But it's really not as easy as just saying, I'll give everyone a survey and then I'll collect everything in their brain on a screen. You guys, no one trusts you enough to give you that information. That's the problem, right? So if the problem is that your employees don't trust you enough, can we use AI and other tools to recreate a culture of trust? What does that mean for leadership? What does that mean? I mean, there's a very important human in the loop in that scenario. And again, the desire has to come from a human. The desire doesn't come from AI.

Tracy Borreson [00:51:50]:

But if you have the desire as a human, I think there are cool opportunities to use it, even to brainstorm, right? Could the AI come up with an idea for how a leader might bring Start, Stop, Keep into a conversation? Maybe, right? And then you have a small number of leaders who actually do it. And then you have some data associated with that, data that says, hey, the teams that are doing Start, Stop, Keep are actually outperforming the teams that aren't. Okay, now we're going to make that mandatory, and now we're actually going to train our leaders on how to do that in a trustworthy way, instead of just saying you have to do it, right? But it's got to be a holistic desire, because it's going to take effort. It's going to take human effort to make any kind of change in that arena.

Bertrand Godillot [00:52:43]:

And this gets into governance and ethics and things that we have not addressed today, but we potentially could do that another day, because this has been great.

Tim Hughes [00:52:54]:

I've enjoyed the discussion, yeah, and some great questions.

Adam Gray [00:53:02]:

Exactly.

Tim Hughes [00:53:02]:

Sorry, it's been a nice debate.

Bertrand Godillot [00:53:04]:

Yeah, it's been a very nice debate. We do have a newsletter, so if you want to know more about the show and what is coming up next week, just scan the QR code on screen or visit us at digitaldownload.live/newsletter. With that, thank you to our audience, thanks to everyone around the set, and see you next week. Thank you, bye-bye.

#AIDifferentiation #AICommoditization #PrivateData #DigitalDownloadPodcast #HumanInTheLoop #SocialSelling #DigitalSelling #SocialEnablement #LinkedInLive #Podcast

DigitalDownload.live

The Digital Download is the longest running weekly business talk show on LinkedIn Live. We broadcast weekly on Fridays at 14:00 GMT/ 09:00 EST. Join us each week as we discuss the topics of the day related to digital transformation, change management, and general business items of interest. We strive to make The Digital Download an interactive experience. Audience participation is highly encouraged!