Previous Shows

The Digital Download

Stop Using Prompts: Why Context is the Missing Link in Your AI Strategy

February 13, 2026 · 46 min read

This week on The Digital Download, we are debunking the biggest myth in Generative AI: that the secret lies in the perfect prompt.

It doesn’t.

If you are frustrated that your AI content sounds like a robot or "brand doublespeak," it isn't because your prompt was bad. It’s because your context was empty.

I am joined by Tim Hughes, Adam Gray, Tracy Borreson, and Richard Jones to discuss why "Prompts are processes, but Context is what tells you which process to trigger."

We will dive into:

* The Prompt Myth: Why the industry obsession with "Prompt Engineering" is a distraction from the real work of "Context Building."

* The "New Intern" Problem: Why treating AI like a magic button fails, and why you need to onboard it like a human teammate with a clear job description.

* The Beige Trap: How to stop your AI from generating generic "brand doublespeak" and force it to speak with your unique brand voice.

* Building Your Second Brain: How to create a "Single Source of Truth" that houses your business context, preventing hallucinations and ensuring consistency.

* The BYOAI Revolution: Why the future isn't about the company's AI, but about "Bringing Your Own AI" to work—and the governance nightmare that follows.

Join us to learn why you don't need a better prompt library—you need a better onboarding process for your digital teammates.

We strive to make The Digital Download an interactive experience. Bring your questions. Bring your insights. Audience participation is keenly encouraged!

This week's Host was Bertrand Godillot.

Panelists included Tim Hughes, Adam Gray, Tracy Borreson, and Richard Jones.

Transcript of The Digital Download 2026-02-13

Bertrand Godillot [00:00:09]:

Good afternoon, good morning, and good day wherever you may be joining us from. Welcome to another edition of the Digital Download, the longest-running weekly business talk show on LinkedIn Live, now globally syndicated on TuneIn Radio through IBGR, the world's number one business talk, news, and strategy radio network. Today on the Digital Download, we're debunking the biggest myth in generative AI: that the secret lies in the perfect prompt. It doesn't. If you are frustrated that your AI content sounds like a robot or brand doublespeak, it isn't because your prompt was bad. It's because your context was empty. In today's panel, we will explore the concept that prompts are processes, but context is what fuels great outcomes. But before we kick off the discussion, let's go around the set and introduce everyone.

Bertrand Godillot [00:01:09]:

While we're doing this, why don't you in the audience reach out to a friend, ping them, and have them join us? We strive to make the Digital Download an interactive experience, and audience participation, as you well know, is highly encouraged. Tracy, would you like to kick us off, please?

Tracy Borreson [00:01:27]:

Yes, of course. Thank you, Bertrand. Good morning, everybody. Tracy Borreson, founder of TLB Coaching and Events. We're all about authenticity, and I'm really excited to talk about authenticity in terms of context and prompts because it's totally relevant. I'm a super proud partner of DLA Ignite and always a fan of being in this conversation. I feel like it's one of those days where, like, people are just gonna say stuff. There are lots of jokes backstage before we went live, so we're gonna see what happens.

Bertrand Godillot [00:02:00]:

Excellent, thank you, Tracy. Tim?

Tim Hughes [00:02:03]:

Oh, thank you. Welcome, everybody. My name is Tim Hughes. I'm the CEO and co-founder of DLA Ignite. I'm famous for writing the book Social Selling: Techniques to Influence Buyers and Changemakers.

Bertrand Godillot [00:02:15]:

Very famous indeed. Adam?

Adam Gray [00:02:18]:

Hi, I'm Adam Gray. I'm Tim's partner and co-founder of DLA— Tim's business partner.

Tim Hughes [00:02:23]:

Business partner, thank you.

Adam Gray [00:02:24]:

Thank you.

Bertrand Godillot [00:02:26]:

There we go, there we go.

Adam Gray [00:02:27]:

And I'm a very proud DLA Ignite person. Thing is, you're not the only one that's proud, you see, Tracy.

Tracy Borreson [00:02:34]:

I said it, super proud.

Adam Gray [00:02:36]:

Oh yeah, you did.

Bertrand Godillot [00:02:37]:

Yeah, that's one stage upper.

Tracy Borreson [00:02:42]:

Richard, let's see what Richard says today.

Richard Jones [00:02:46]:

Hi, Richard Jones here from Curate. I'm an immensely proud partner of DLA Ignite tonight, and yeah, really looking forward to this afternoon. As Tracy suggests, there was a lot of fun backstage, so hopefully some of that will transport itself into this afternoon's event.

Bertrand Godillot [00:03:08]:

Thank you, Richard. And myself, Bertrand Godillot, I am the founder and managing partner of Odysseus and Co. And should I say, probably outstandingly proud that DLA Ignite is my partner. So now the landscape is set. As you can see, it's Friday, and there's quite a Friday atmosphere around the set. So, great stuff. Ladies and gentlemen, may I start with the foundational question? Here we go.

Bertrand Godillot [00:03:40]:

Why is the industry obsession with prompt engineering a distraction from the real work of context building, do you think? Who wants to take this one?

Tracy Borreson [00:03:55]:

I can start. I think part of the problem is the natural human desire for things to be easy. And prompts are words that you feed into a machine. So it's theoretically easy to just get a list of quote-unquote good prompts, feed them into a machine, and get output that you can use, similar to any other template. And I come from marketing, so we use all the templates, right? Email templates, website templates. The world is templated to make it easy for people. And prompts fell straight into that category immediately.

Tracy Borreson [00:04:41]:

So it was like, get the 48 best prompts to blah, blah, blah. And people are like, oh yeah, great. I don't have to think. I can just put in these prompts. And I've seen people use them to create entire marketing programs. It's nuts, honestly, because exactly as you framed it at the beginning, Bertrand, they don't have context. Most of the prompts don't have context at all, and they definitely don't have your context. I would say it's fine to start with the template, but unless you have infused your context into it, you're going to get a templated answer from a templated prompt.

Tim Hughes [00:05:24]:

From the, from the question that we've just had from LinkedIn user Bertrand, I think we need to define what a prompt is.

Bertrand Godillot [00:05:34]:

Guidance.

Tracy Borreson [00:05:35]:

Yeah, the question is, when you say prompts, are you referring to scraping? I don't think we are.

Bertrand Godillot [00:05:43]:

No, no, no, we are not referring to that. I don't think so. We're referring really to guidance that you would give an AI.

Tim Hughes [00:05:51]:

So a prompt is when you go to ChatGPT, other AIs are available, you type in and you give it instructions. Create me a picture of Tim Hughes surrounded by his gramophone, whatever, and that's the prompt.

Adam Gray [00:06:11]:

So it's an instruction, isn't it?

Bertrand Godillot [00:06:14]:

Question from Sam who says, is this a false dichotomy? I suppose, Sam, that you're probably pointing out the fact that we are separating prompts and context, and that might be irrelevant. I could not agree more, and I think around the set it's probably the same.

Tim Hughes [00:06:34]:

I'm going to go and look up what that means.

Bertrand Godillot [00:06:39]:

With that said, I'm going to leave.

Tim Hughes [00:06:41]:

A comment saying, what does false dichotomy mean?

Bertrand Godillot [00:06:44]:

Yeah, maybe we can wait for Sam to come back to us on this. Let's go back to the world of templates. I think there have always been best practices anyway. So we're not throwing prompts and prompting best practice out the window at all, because I suspect that if you don't have the right process, then you're probably going nowhere as well. But then the question is the level of effort, you know, how do you split your level of effort? Between having the best.

Adam Gray [00:07:23]:

Process in place.

Bertrand Godillot [00:07:25]:

But not really anything to fuel them.

Adam Gray [00:07:28]:

So I think a few years ago, I say a few, there haven't been many years of AI, but a few years ago people spoke about how prompt engineering was going to be the next big career opportunity. At the point that that kind of comment was made, and it was quite trendy to look into prompt engineering in some depth, you couldn't really have a conversation with AI. You would turn it on and it would know loads of stuff, but it wouldn't know anything about you. So you would have to engineer a prompt that would give it context and instructions and a required output and all of that stuff. Fast forward to today, you know, you can switch on Gemini. Someone mentioned Gemini in the chat. You can switch on Gemini or any other AI, and you can basically have a conversation with it.

Adam Gray [00:08:28]:

And I always think a good starting point is: based on what you know about me, what do you think I do? And the AI will say, well, I don't know anything about you. Okay, so that's the first issue. You have to teach it about yourself. And a great way to do that, if you are like us and many of our viewers, if you are a prolific blogger, then there's loads of stuff that you can pour into it that'll teach it what you know, what your viewpoint on stuff is, how you speak, all of those things that create a degree of, uh, you-ness about it. And you can augment that with information about the industry, reports, so that the AI becomes very knowledgeable about your sector. But then you still have to give it really clear instructions. So you can say to it, write me a blog about why people say these things. And it won't be very good.

Adam Gray [00:09:31]:

But say, write me a blog about why people say these things, and focus on this viewpoint, and use these words, and draw out examples that illustrate A, B, and C, and you're likely to get something which is much better and much more accurate. And I think that your point about people wanting the easy option is exactly that. I think that the majority of times when you read some AI-generated content, somebody has gone on and said, write me a 500-word blog about blah. And then, not surprisingly, noise comes out. You know, the AI knows a little bit about what you're asking it to write about, but it doesn't understand what point you're trying to make, what your experience is, what specialisms you have, what your worldview is, or any of these things that help it become you.

Tim Hughes [00:10:21]:

Just to add some context also, because I know that Bertrand is a very modest person. One of the things that we do with the Digital Download, part of it, is to educate. From the questions that we're getting, it's clear that there's a big expanse from someone who doesn't know what a prompt is to someone like Bill MacLean, who's clearly quite advanced in the world of AI. So what we're trying to do, Sam, is to break things down for people so they can understand. The other point about Bertrand is that he is actually doing a whole big implementation of AI at the moment and has learned a whole load of things, which he's sharing with people, and we're sharing through the debate and the conversation. And the summary of what he's learned is: you can write a whole bunch of prompts, as Tracy has pointed out, but actually you need to have the context. And I think the worldview that we're seeing right now, and, you know, maybe there will be one or two people out there that are different to this because there always are, is that there are a lot of people out there selling prompts. There are a lot of people giving away prompts for free, and people aren't getting a response from them. And what we're pointing out is that you need to have context and data.

Tim Hughes [00:11:48]:

So define that: your ICP, your case studies, your websites, all of those things, to use as context as a way to get the best thing from AI.
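The recipe the panel describes here, context first (ICP, case studies, voice), then a clear instruction, can be sketched in code. This is a minimal sketch assuming a generic chat-style "messages" API; the `build_context_prompt` helper and all the business details are hypothetical illustrations, not anything the panelists actually use.

```python
# A minimal sketch of "context plus prompt" for a chat-style AI API.
# The helper and all business details below are hypothetical examples.

def build_context_prompt(context, instruction):
    """Assemble a system message from business context, then the user's instruction."""
    system = "\n\n".join(
        f"## {heading}\n{body}" for heading, body in context.items()
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": instruction},
    ]

# Hypothetical context: ICP, case studies, and voice notes, as discussed on the show.
context = {
    "Ideal customer profile": "Mid-market B2B firms adopting AI for marketing.",
    "Case studies": "Acme Corp cut content turnaround from 5 days to 1.",
    "Voice": "Plain-spoken, first person, no buzzwords, short sentences.",
}

messages = build_context_prompt(
    context,
    "Write a 500-word blog on why context beats prompts. "
    "Focus on the new-intern analogy and draw out two concrete examples.",
)
```

The point of the sketch is the split: the system message carries the durable "who we are" context, while the user message carries the one-off instruction, so a bare "write me a blog about blah" never travels alone.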

Bertrand Godillot [00:12:02]:

Which is exactly what Sam is stating here. Context engineering, this is what we can talk about, is definitely the right counterpart, you would say, to prompt engineering.

Adam Gray [00:12:19]:

Just before we do, I've just gone on to one of the broadcast channels on LinkedIn and we've got quite a few comments that are coming in that are not showing up in our comments tab at the moment. So we have from Greg Walters, we have just a greetings all, but from Nilsson Ivano, we have a comment that says, what has been your biggest challenge when training AI to understand your context?

Bertrand Godillot [00:12:47]:

Good question.

Adam Gray [00:12:49]:

Very good question. Yeah, so I thought I would flag that.

Bertrand Godillot [00:12:51]:

Sorry, Bertrand, who has an experience on that?

Adam Gray [00:12:57]:

You?

Tracy Borreson [00:12:57]:

Well, I really want to hear what Bertrand has to say about this, because he's in it right now. But one of the things that came up for me in the earlier part of the conversation was this context of training and coaching. You're coaching the AI; you have to help it. And that's a process, right? I have to put effort into coaching a thing, or, as Bill said earlier, into making sure that you have the right datasets and that you're mining stuff from the knowledge base, right? The effort has to go into those things. It's not effortless. There's an effortfulness that's required to get the contextual stuff in, so you can get contextual stuff out. And if we don't have the skills or patience for training and coaching, then we're probably not going to get the most out of AI, whether it's for content or for anything else.

Bertrand Godillot [00:13:59]:

Yeah, and when you think about it, it's not really different from having a new teammate on board. When you get a new teammate, you can explain the processes, the tools, etc., which is what most companies do, by the way, at least large-scale companies, rather than giving them context, which is basically: who are our target customers, what are our value propositions, how do we articulate these value propositions? Do we have competitors? Who are they? How do they sound different from us? What is our preferred way of engaging? All of this stuff is what actually makes the difference, and it is much more than just learning the CRM system: this is how it works, this is our funnel, these are the various steps in the funnel. Which is also interesting, of course, and is part of the overall onboarding process, but it shouldn't necessarily stop there, right? A good onboarding process is about culture as well. So that's probably where we can drive this.

Bertrand Godillot [00:15:28]:

And to go back to the question, which was, uh, what's been your biggest.

Tim Hughes [00:15:33]:

Challenge when training AI? Well, that's— well.

Bertrand Godillot [00:15:36]:

That's part of the challenge. I think there are two things there. There's you and there's your company, right? Everything about your company is pretty easy to put together, I would say, in terms of what your products and services are, your company, your target customers, etc. Then there's always a missing piece, which is you. How do you write? What's your voice, etc.? And that's probably the most challenging piece. Especially because there's this kind of fantasy that, you know, why— back to the easy button, Tracy— why don't we get a bot to write for us? That's a big fantasy, I would say, or at least it was. It's less and less so, but it still sounds to me like something that is really unachievable. Not necessarily because it's technically unachievable, but because I don't think it makes sense, because I do believe that we make a difference as individuals, especially when we write content.

Bertrand Godillot [00:16:49]:

So a long, a long answer for a short question, but definitely the core of the topic.

Tim Hughes [00:16:56]:

So, Bertrand, as with individuals, do you set the AI objectives?

Bertrand Godillot [00:17:06]:

Well, that's part of your guidance. As part of the onboarding process, if you think HR, you will give an initialization, basically: an initial transfer of culture and skills, and of culture and processes. And then you'll set short objectives, which is basically guidance. But then the important thing as well, I think, is all the reviews. You know: are you performing on track? What is it that you miss? Are you missing anything that would increase this performance? Do you have too much? Am I a good coach? All of these questions are actually very relevant to our discussion and to improving the way your various assistants, whether these are GPTs or Gems, are actually performing.

Tim Hughes [00:18:22]:

So this, let's call it ecosystem of reference, is it a document or a database or is it something else?

Bertrand Godillot [00:18:32]:

It doesn't really matter, to be honest, and let's not get too technical, but that's, it could perfectly be a document. I think so.

Tim Hughes [00:18:45]:

It could be, so for example, it could be a document which is a set of keywords.

Bertrand Godillot [00:18:53]:

It could be a set of keywords, it could be a URL, it could be a description, it could be your original content if we're talking content, it could be your convictions. I think that's— and when you mean.

Tim Hughes [00:19:07]:

Convictions, you don't mean how many, um, parking, um, fines I've got.

Richard Jones [00:19:15]:

It depends what question you're asking.

Tracy Borreson [00:19:17]:

No, I— and what you're trying to write about.

Tim Hughes [00:19:22]:

What do you mean by convictions? Because otherwise we're going to get someone saying, well, it's not picked up my convictions.

Bertrand Godillot [00:19:29]:

So the things that you are— sorry, Richard.

Richard Jones [00:19:32]:

Beliefs, isn't it, Bertrand? I would say your beliefs, your principles.

Bertrand Godillot [00:19:36]:

Yes, your beliefs, your principles, your values, things that set you apart from others, especially if you are in a really mainstream market. You may have some strong points of differentiation, and because there are lots of assumptions made by your teammates, you probably need to reinforce some of these, at least give them your key differences, your key values, what I call the convictions, but that's a French word.

Adam Gray [00:20:24]:

No, it works in English as well. But I do think that part of it is how we frame AI in our own minds, isn't it? You know, it's like an intern that's very clever but doesn't really know very much. It certainly doesn't know any protocols. And I think what's really interesting is when we see Grok or whatever as a piece of AI and it's left to its own devices, it ends up somewhere that you don't want it to be. So I think it's absolutely crucial that we give it, as you said, Bertrand, those pointers and markers to understand: this is acceptable and this isn't, this is what I want you to talk about and this is what I don't, this is the language I want you to use and this isn't. And that's absolutely crucial if this tool is going to go out there and represent you in the marketplace, because it's very easy to destroy a reputation by posting something that you didn't mean to post. And, you know, history is littered with people that have made mistakes like that. So I think that part of this is about trying to— and we had something in the comments about, are you familiar with the Texas Instruments home computer from the 1980s? And I remember Douglas Adams in one of his books said that computers are like children, and part of the thinking process is around breaking down very complex tasks into a series of very small tasks in order that the computer can execute them.

Adam Gray [00:21:54]:

And it's kind of what you need to do with AI. You can't make any assumptions about what it knows or what its viewpoint on things is. So you have to be really clear and unambiguous about those, don't you?

Tracy Borreson [00:22:06]:

My 7-year-old has a school assignment right now in computer science, where they have to write code for how to make a peanut butter and jelly sandwich. And then the moms and dads or whoever have to test the code so they can debug it. And it's hilarious, because they're just like, put peanut butter on a sandwich. And so you take the jar of peanut butter and put it on a sandwich, right? This is how literally computers and prompts are taking our words, and this is why it's so important to have that full context. My husband and I have tried to be particularly funny about it: like, how could we interpret this in not the correct way? We don't test for a lot of that when we're prompting or while we're going about our regular day, right? So if we're not doing that in our regular day, we might not be thinking about it when we're writing a prompt, and we really have to think about those types of things when we're writing a prompt. And I want to go back to Greg, who had a really good comment— it's way back there now— about the biggest challenge in training the AI, if we want to go to that. Because I know Greg has a lot of experience in AI, and he said: beware, you're thinking in terms of the flat database.

Tracy Borreson [00:23:30]:

So, Greg, I would love for you to share more information on how to not think like a flat database in AI, because I think that'd be super useful to the conversation. But yeah, Greg's comment here about the biggest challenge in training the AI is that it's secondarily coaching yourself on how to change from search-and-retrieve mode to ask-for-an-answer, not sources, mode. And yes, you are training a new hire who knows nothing about you and your business. So let's keep that in mind, shall we?

Bertrand Godillot [00:24:03]:

Yes, exactly. All right, so other questions?

Adam Gray [00:24:11]:

Well, I think we've got a great comment that's just come in: I feel like it's an evolution of Googling; not all Googlers are created equal. And I think that's a really good point, isn't it? Because previously, when I used Google, I would search for something and Google would present me with countless options that it thought solved the problem I had, and then it would be up to me to sift those options. And now, even if you are searching within Google— I know, Tim, you do a lot of searching within ChatGPT or Gemini to get answers to things— but even if you're still using Google, AI is the thing that comes at the top, isn't it? So it's becoming a tool to answer questions rather than give you sources. And I think it is fundamentally changing how we look for things. And I guess in some ways it's quite empowering, isn't it? Because previously you would have Googled something and you would have got 1,000 different answers and you'd go, well, actually, I'm no further forward now. I know 1,000 companies profess to answer the question that I have, but I don't know enough about it to know which are good and which are bad.

Adam Gray [00:25:23]:

Whereas now you search for it and AI says, here are the top 3 that you need to know, or the top 5 that you need to know. And that's a very different world that we live in, isn't it?

Richard Jones [00:25:33]:

I think there's a certain element of starting with why, to steal the name of Simon Sinek's book. All too often, people are putting in the what they want rather than the why they might need it. And I tend to find that I get a much better answer if I frame it that way: the way the context comes in, the way the question is posed, being a little bit more around why you need this information. It's a very simple rule of thumb to use before you start digging around, because otherwise you will just get lots of whats.

Adam Gray [00:26:16]:

Yeah, but I mean, I think when we're using AI to actually generate some output for us to use, I guess the question is, how much priming do you need to give the AI to get an answer that is acceptable? Because clearly, if you type in, write me a blog about blah, which is no effort in giving the AI context, as you would say, Bertrand, and no effort in even writing a decent prompt, then what you get out is just more noise, and stuff that makes it more difficult for people to find what they're looking for. So how much is enough?

Tracy Borreson [00:27:05]:

So I use ChatGPT; that's my preferred AI. And it's interesting, I love saying, use what you know about me through our chats to answer this question. And sometimes it's really good and sometimes it's really whack, right? Which shows you the gaps in the context. So I love exercises like that to help you understand what context it has and what context it doesn't. But I'm a big fan of feeding it as much as possible. So one of the things I started doing, when I have transcripts from calls, is loading them, because I'm like, this is context about how I talk about things, how I connect dots, these types of things. And it's been really interesting, because, again, sometimes it does surprisingly interesting things and you're like, oh, this is like in that conversation with blah blah blah. And I was like, oh, yeah, you're right.

Tracy Borreson [00:28:11]:

And that's one of the really cool powers of AI too: it has all of that cross-referencing speed and power that a human brain doesn't necessarily have. So load different contexts of things, right? Load your writing, load your conversations, load a podcast transcript. Load different things, because the way a human shows up is diverse, right? None of us show up exactly the same all the time, yet we show up with the same underlying brand. And so, the more breadth and depth of context you can give it, the better. Hey, it's got lots of processing power, give it as much as you can.
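Tracy's "load everything" approach, pooling writing, call transcripts, and podcast transcripts into one labelled context, might be sketched like this. The source names, texts, and size budget are made-up illustrations, not a real pipeline; the sketch just shows the idea of labelling each source and capping the total so the context stays manageable.

```python
# A sketch of pooling diverse sources (blogs, call transcripts, podcasts) into
# one labelled context document, trimmed to a rough character budget.
# All source names and texts below are hypothetical stand-ins.

def combine_sources(sources, max_chars=8000):
    """Concatenate labelled source texts, in insertion order, up to a character budget."""
    parts = []
    used = 0
    for label, text in sources.items():
        chunk = f"### Source: {label}\n{text.strip()}\n"
        if used + len(chunk) > max_chars:
            break  # stop once the budget would be exceeded
        parts.append(chunk)
        used += len(chunk)
    return "\n".join(parts)

sources = {
    "blog-2026-01": "Context is what tells you which process to trigger...",
    "call-transcript-acme": "Client asked how we'd onboard an AI like a new hire...",
    "podcast-digital-download": "Prompts are processes; context fuels outcomes...",
}
context_doc = combine_sources(sources, max_chars=500)
```

The labels matter as much as the budget: tagging each chunk with where it came from lets the AI (and you) trace which voice or conversation a given claim surfaced from.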

Richard Jones [00:28:56]:

I find it builds up over time, and it's uncanny when it starts to bring up stuff and puts it into context that I've used in the past, you know. And I'm thinking, hey, this is actually ahead of the game here, because it's picked up on what I've asked in the past and it's kind of brought it to bear now. So I don't think you can always create it in one go; context is built up over time. It's like knowledge, isn't it? It's not a quick win.

Tracy Borreson [00:29:32]:

Yeah, and I think that's the interesting thing to remember about context, and maybe that is what Greg was referring to in terms of a flat database: context can grow all the time, right? So we come and we have this conversation today, and then someone says a thing, and it triggers some kind of deeply buried knowledge in our head, and now that's become explicit, and now that can become context. And this is, as I say it, the really interesting thing about feeding context to AI or to a trainee or anything else: it has to be at that level of explicit processing. You have to be very clear about it. And a lot of times really important context is underneath the layers. So, again, being able to include as many things as possible that make those intangible things tangible allows you to create ongoing, greater context.

Bertrand Godillot [00:30:34]:

That's a very good point you're making, Tracy, because I'll take it slightly differently. Your context setting is not a one-off, right? Because when you think about it, if we just dump today everything we've created, as content, for instance, then, make no mistake, there will be no new ideas, right? It is your brand dump as of today, but we're changing. And that's the good news: our thoughts are changing, our priorities are changing, our areas of focus are changing. Sometimes we also improve ourselves. And therefore you need to not do that just once, but regularly update your context so that your teammate can follow. Because otherwise it will be stuck at the initialization point, basically. So I think that's also quite interesting, because this idea goes back to what makes us different. I think that's really the question behind anything you can do with GenAI: on this task, am I making a difference? Because if I'm not, then I can delegate it.

Bertrand Godillot [00:32:21]:

Delegation doesn't prevent control, by the way. Or rather, that's the French way of saying it; delegation goes along with control is probably a better way to phrase it. But yeah, it's: where am I making a difference? And if I'm making a difference, that's what I should keep. After probably a little bit more than 2 years playing with this stuff, that's really where I came down: am I making a difference on that task? Yes, I am. Therefore, this is for me. And where am I not making a difference? Then I can train someone to do that task for me, and obviously control it. Okay.

Adam Gray [00:33:12]:

Isn't there an issue there, though, with whether what you're doing is sustainable behavior? You're looking at your workload, and you're saying: these things here have to be me; these things here do not have to be me, but they have to be steered by me. Okay, so that's great. But in my experience, the vast majority of people don't do that. What they do is say, I'm going to outsource all of this so I can do nothing. Well, I think that's obviously many people's view of utopia, isn't it? That I'll outsource all of this and then I can sit watching films all day rather than actually doing any work. But my point is that the problem is we are likely to see a similar thing to what we saw with email. So email came out.

Adam Gray [00:34:04]:

And it gave everybody the opportunity to communicate in real time with anybody else around the world, and that's great. It also gave you the opportunity to do it at zero cost, which is also great. So I need to speak to Bertrand, so I send him an email, it arrives instantly, he can read it, respond if necessary, and come back to me instantly for zero cost. That's a benefit. What isn't a benefit is where Tracy and Richard, both full-time marketers, say this is brilliant, there's zero cost for sending these things, so therefore I'm going to send 100 million emails. I don't even have to bother personalizing them. I'll just pour the addresses in and press send, and that will generate me loads of business. And the problem then is that when everybody does that, it becomes increasingly difficult for Bertrand to find my email in the vast flood of emails that have come in. So we're likely to be looking at a similar kind of challenge with this, aren't we? With vast numbers of people that do the bare minimum to get an output that they can then share.

Adam Gray [00:35:02]:

Muddying the waters for everyone that's trying to do it right.

Richard Jones [00:35:05]:

There's a certain amount of what you— I mean, it's very much a case of you get out what you put in. So the more you put in, the more incremental value you should get out of it. If you put very little in, you're not going to get a great deal out of it. You might believe you're getting something out of it because you're automating a boring task, but you're not really using it to its best effect. For me, it is a case of the harder I work at it, the more impressive the results seem to be.

Tracy Borreson [00:35:40]:

I think there's a level of caring that goes into that too. Actually, Bala had a comment about making a difference versus a negative difference. Okay, bringing it back to context again. There are so many things, and I think a lot of people are using AI and automation technologies to do things they don't want to do, instead of asking themselves, why do I not want to do this? Because a lot of times, like, I do a show on Wednesdays, shameless plug, the Crazy Stupid Marketing Show, where we talk about all of these things, right? The things people do because they've been told to do them. So you post on social every day, you use AI to generate your content because it's faster. People are just doing this because they heard it somewhere, right? But if you really pressure test that and say, like, why am I doing this? Oh, because someone told me that I should.

Tracy Borreson [00:36:35]:

Okay, that's not a good reason. Roll that back. And so the technologies aren't at fault here. But humans, we have to try, we still have to try. And if we don't, then we're going to fill our days with continual stuff that isn't meaningful for us. Whereas the way I use AI is very meaningful to me. It's super helpful. It makes me faster, right? Like everything I use it for, I can see that I can answer the making-a-difference question.

Tracy Borreson [00:37:10]:

Is it making a difference here? But there are a lot of things where a lot of people, especially in business, and I will maybe say particularly in marketing, maybe sales as well, are doing stuff to do stuff. Stop it. You don't need to do that stuff faster if you didn't need to do it in the first place. So I think there is also this discernment that needs to come into the conversation: do I need to do this at all? And if the answer is no, then just don't.

Adam Gray [00:37:43]:

But also, not only do I need to ask, do I need to do this at all, but is this the right thing to do? And I don't mean, is it ethically right? So I did a post today, I think, and Tim commented on it saying he'd just read something that someone else had posted saying you have to touch somebody 15 times before they notice you. The problem is if somebody touches me 15 times via email and telephone, they get blocked on every single platform. It doesn't take 15, it takes 5 times, because I'm sick of it. And I think most people out there would recognise that how they want to be approached by somebody that has a solution that can help them is often very, very different from how they choose to approach other people. And I think that asking, how would I feel if I received this, is a fundamental question to set the parameters for whether or not I think this is a good thing or a bad thing to be doing. And I think that often people hear what they want to hear and believe what they want to believe. It's different because my solution is really good, so it doesn't need to abide by the same rules that everyone else in the world does. And until we kind of shake this off, AI will just be something that creates more noise for us, won't it? Or isn't there that risk?

Tracy Borreson [00:39:06]:

Well, I mean, I just feel like I come back to the point about AI being an amplifier, right? It's an amplifier of what we put into it. So if what we're getting out of AI right now from a global perspective is noise, that's what we're putting into it. We think we need to create noise. Why do we think we need to create noise, right? Like, what is noise? Is noise good? Would I define noise as good? No. It makes me think of an 8th grade band, right, where all the horns and things are squeaking and you're like, this does not sound good at all. But you can put all the same instruments into a symphony orchestra and it sounds phenomenal, right? So it doesn't have anything to do with the instruments, it has to do with how people are playing them. And if people are playing them for noise, that's what we're gonna get.

Richard Jones [00:39:58]:

What did we use to say? Empty vessels make the most noise. And this is the ultimate amplifier of those empty vessels, isn't it?

Adam Gray [00:40:05]:

It is. I think the best comment of the day, and possibly the year so far, has just come in from Greg Walters: "Okay, try this. We're not working with data, we're working with experiences, memories. Crude but illustrative. The LLM in theory can retain results, deliverables, processes through patterns, not data." Absolutely.

Tracy Borreson [00:40:32]:

Brilliant. And also, if you're watching in a place where you can read the comments, Greg did answer my question earlier about the LLMs if you want to go back. And he also says the concept is deep and deserves more space than a comment. So you should probably connect with him and deep dive that question.

Bertrand Godillot [00:40:51]:

Yeah, I have to say, we have a very talkative friend in the chat. He's got nothing really of high value to say, but he makes a lot of noise on our discussion.

Tracy Borreson [00:41:09]:

Yeah, actually, can we talk about that? Because context, right? We're here to talk about context. And I get that this happens in a lot of places. I've had it happen on my show, right? Where people are just talking. And it's fine. It's not bad, it's not wrong, it's not any of that, but it's just lacking context. So then people are like, meh. And this adds to the behavior of us thinking that we don't need the context, right? Because we see so much of this that we forget, or it's like the context engine got turned off, right? But this is what happens. And just like Adam was saying with our experience with email, right? We experience these things and we read them and we're like, well, that's not contextual at all and completely irrelevant. We do that, but we don't really notice that we're doing it. So one of the most powerful things, I always say, is to stop and notice.

Tracy Borreson [00:42:17]:

Notice the impact: that behavior outside of you, what's the impact it has on you? And do you want to have that impact on other people? And if you don't, then don't do that. I mean, it seems pretty simple, but it does take slowing down. It takes slowing down every time we're trying to rush. And I think this also contributes to, like, I'll just get a prompt and then I'll just do a thing, because we're trying to rush through everything. I think we've got to slow down. And when we slow down and do the meaningful stuff, then AI can be very helpful. But if we're just rushing through it— because I was talking to a lady the other day and she said, it's like if you're driving a car at 85 miles per hour and you want to turn, right? You have to slow down.

Tracy Borreson [00:43:04]:

You have to. Otherwise, you'll roll your car. Even my 7-year-old knows that. But we're all rushing so fast to do all the things. Our businesses are going to fail if we're not rushing so quickly. No, none of that is true. In fact, you're going to roll your car if you keep going at that speed. So maybe just slow down and take the corner at a meaningful speed.

Adam Gray [00:43:27]:

And I think also, you know, it's the easy button. Tim said that one of the people on one of the shows that he's on is a bot that plugs in and leaves comments. And I think it's really interesting that people would see doing that as something of benefit rather than a problem for other people. And I think that social networks in particular, but the wider digital communications world too, are platforms for people to come together and decide whether or not they want to do something and collaborate in some way, either as client and supplier, or as partners, or as friends, or whatever it may be. And actually, when you start to take that out of the equation, you just create more noise, don't you? You create fewer opportunities for me to spot the person that I should be having a conversation with and for them to spot me. And that's the big problem.

Tracy Borreson [00:44:38]:

Isn't it? Yes. Also, I just noticed on my feed, which didn't come through here, that Andrew Schlesser is here waving. So normally we all wave back at him. Hi, Andrew. But the comment didn't come through to everybody. I don't know what's going on with.

Bertrand Godillot [00:44:59]:

Those comments. Where do we want to take this now? Any specific topics, any specific questions, areas you would like.

Tracy Borreson [00:45:11]:

To cover? I think the big thing coming up for me is just to keep context top of mind, not even just in AI. Where are we learning things about ourselves? Are we learning things about ourselves through conversation? Are we learning things about ourselves through journaling? Are we learning things about ourselves in business meetings? Like, where do we feel like we're learning the things we need to know to create this context that makes sense? Because as humans, we subconsciously collect that all the time. But now if we're taking that and sending it over to AI, well, let's not pretend that we have perfect self-awareness as humans, right? So unless we really have a good practice of paying attention to context, we can't feed the context to anything. So I think it's an important process to collect.

Richard Jones [00:46:12]:

I mean, it's interesting, this, because I'm a bit of an old school salesperson. And when I first started out, you'd go and see your customers and you'd get shown around the facility where they made whatever they made or did whatever they did. And you started to get a real context in terms of who you were dealing with and what their challenges were, and you sold accordingly. And we seem to have moved away from that, and it's now just sort of showering people with stuff and assuming that they will figure out, you know, how it's going to apply to whatever they do. And I think we've lost this ability to engage with those we're trying to engage with. And that is all about context. And that, you know, is at the heart of why selling has become a lot tougher, because we're not aligning what we do to the needs of those we're trying to appeal to. And that comes very much to the fore when you start bombarding people with AI-driven content that bears no relationship to what they're interested in.

Tim Hughes [00:47:21]:

What was it that Seth Godin said, Adam? It's not my job to sort out your marketing.

Adam Gray [00:47:27]:

Yeah, it's not my job to sort out your marketing problems. Who said that? Seth Godin. And he didn't mean it as a marketer. The perspective for this was: if he gets a load of emails through, it's not his job to sift out which ones he should be reading. It's your job to make them attractive. So basically the whole lot will go in the bin.

Richard Jones [00:47:49]:

I did mishear you and thought you said Seb Coe, and I was trying.

Adam Gray [00:47:52]:

To figure out— That too.

Bertrand Godillot [00:47:58]:

That too. Excellent comment from Greg again, which I really enjoyed. Greg says: yes, good salespeople, maybe even old school, should make the best prompters. Going back to what you were saying, Tracy, on what it is that I should be looking for: we have a lot of challenges when we coach customers around getting them to talk about themselves. Which is definitely the starting point if you want to, let's say, increase the performance of your various teammates. So I think there is obviously a point around thinking about what it is that people need to know about you. And therefore, do you.

Adam Gray [00:49:01]:

Know it?

Tracy Borreson [00:49:01]:

Well, I think a lot of people think that there's a lot of unnecessary stuff, right? Especially when you're thinking about what you might feed into AI from a, like, human behavior, personality type of thing. But it's really interesting, because I was at the University of Calgary last night judging their marketing club's brand hackathon. And it's funny, it reminds me of being in business school, right? All the kids have their business suits on and they all have the same things in their slides, because this is what they teach you in school. And I'm just like, wow, we really aren't teaching people about personal context at all, at least in business school. And maybe not all business schools are created equal. Not that the U of C has a bad business school, it has a very good one, but we just don't teach that, right? Because it's not part of the curriculum. And it's really interesting because this kind of ties back to what Greg was saying about the flat database, right? A curriculum is a flat database.

Tracy Borreson [00:50:06]:

It's this thing. We teach all the kids the same thing, right? And this is what we get. And it was really interesting, because the teams that ended up winning were the teams that had a unique perspective on the case, but they couldn't identify what their unique perspective was. So all the teams came to talk to me after, and they wanted to know what it was about their presentation that stood out, because they don't know. And so I think this is also— but it's very difficult to get that information out of yourself.

Bertrand Godillot [00:50:43]:

Right?

Tracy Borreson [00:50:43]:

And if it's just us sitting with an AI input engine, right, then it's just us. This is why I like the transcripts, because they show what other people can get out of you. And so I think that's another— yeah, and you were talking, Bertrand, about talking about yourself, right? Like, no, I don't want to go stand up in a presentation with a slide deck and be like, here are all the things that you need to know about Tracy Borreson. I feel like that's uncomfortable for anybody. And if that's what we think it is, then it's going to feel uncomfortable. But when you get into a vibe conversation with your people, right, you say stuff about yourself, you reveal your curiosities, you do those types of things. And so again, I think it's influencing those other types of behaviors into the overall ecosystem of our context gathering to say, like, how is this coming out?

Bertrand Godillot [00:51:35]:

And it also works for— you know, I've had a few experiences with that already. When you ask a group of sales, marketing, and pre-sales people to talk about their ideal customer profile and their value propositions, and you get the transcript of that, it's actually far easier to get to something a bit structured about their targets than it is to get them to talk about their targets directly. So I'm 100% with you on that, Tracy. That's, at least for me, a very, very relevant use case where I don't make a difference, by the way. I make a difference in raising questions, but not in crunching the outcome. We have a great comment from Robert, and I have to watch the time because we're about to be at the end of the show.

Tim Hughes [00:52:37]:

Shall I read that out, Bertrand? Yes, please. Okay: the opposite argument is that experience can create perceived truths and biases. I have worked on a contract where there were 3 people who told you the same truth. Normally, hearing the same thing from 3 people gains acceptance. However, one person could be convincing the other 2. AI can actually test this truth. Humans can also hallucinate, especially if you.

Tracy Borreson [00:53:07]:

Don't like the answer.

Adam Gray [00:53:09]:

Why?

Tracy Borreson [00:53:10]:

We're the ones who built AI. Why do we think AI can hallucinate? Because we can too.

Adam Gray [00:53:15]:

Thank you, Robert. That's good. Yeah, I mean, I think that's a really good point, that we can hallucinate as well if we don't like the answer. And I think that's the key thing, isn't it? You know, sometimes we'll turn these tools on, we'll know that what they're saying isn't actually what we want to say, but we'll say it nonetheless because.

Bertrand Godillot [00:53:37]:

It's easy. All right, ladies and gentlemen, thank you so much. That was really good, but that's already time, so we're going to have to stop there. But if you want to know what's coming next, guess what? You can flash the QR code on screen or visit us at digitaldownload.live/newsletter and you'll get everything about this episode, and also the upcoming ones. So, you know, any questions will be answered there. Hopefully. With that, I'd like to thank everyone for joining today.

Bertrand Godillot [00:54:19]:

Thank you, Tracy. Thank you, Richard, Tim, and Adam. And we will see you next week. Thank you so much. Bye-bye. Bye-bye.

#GenerativeAI #DigitalTransformation #B2B #FutureOfWork #ContextIsKing #BusinessStrategy #LinkedInLive


DigitalDownload.live

The Digital Download is the longest running weekly business talk show on LinkedIn Live. We broadcast weekly on Fridays at 14:00 GMT/ 09:00 EST. Join us each week as we discuss the topics of the day related to digital transformation, change management, and general business items of interest. We strive to make The Digital Download an interactive experience. Audience participation is highly encouraged!
