This week on The Digital Download, we're discussing the most overlooked vulnerability in any organization's security plan: its people. Our special guest is Giles O'Halloran, a Fractional HR Solutionist and strategic advisor with a unique background spanning HR, military intelligence, and AI technology.
In a world of AI, pervasive social media, and sophisticated cyber threats, technology is often seen as both the problem and the solution. However, the most significant risks often originate from human behavior. This episode will explore how to manage the "people risk" that can lead to reputational damage, data breaches, and a loss of trust.
Join us as we discuss questions like:
* Why do we need to treat people like adults and trust them to behave appropriately online?
* How can social media be used for recruitment without introducing bias or falling into legal traps?
* What are the risks of AI in the workplace, and how do we ensure it augments intelligence rather than creating new problems?
* How can an understanding of intelligence gathering techniques help protect a company from data aggregation risks on social media?
As a board advisor on the application of new AI technologies, a former RAF Intelligence Analyst, and a consultant to INTERPOL, Giles brings a rare perspective on how human factors impact security. He provides strategic guidance on mitigating people risk, helping leaders build frameworks that are reasonable, ethical, and effective in today's ever-changing world.
We strive to make The Digital Download an interactive experience. Bring your questions. Bring your insights. Audience participation is highly encouraged!
Giles O'Halloran, Fractional HR Solutionist and strategic advisor with a unique background spanning HR, military intelligence, and AI technology
Adam Gray, Co-founder of DLA Ignite
Tim Hughes, CEO & Co-founder of DLA Ignite
Adam Gray [00:00:03]:
Bonjour. Episode de Digital Download. Oops, sorry. I was doing my best to be a Bertrand clone there, who always says it in a foreign language. So back onto home turf. Hello everybody and welcome to another episode of the Digital Download, the longest running weekly business talk show on LinkedIn Live, now globally syndicated on TuneIn Radio and the IBGR Business Network, the world's number one news, talk and strategy radio network. Today we're going to be discussing the most overlooked vulnerability in any organization's security plan: its people. And joining us today we have special guest star Giles O'Halloran, who I didn't put in the green room because there's only three of us here today, and he's here to help us with the discussion.
Adam Gray [00:00:56]:
So before Giles speaks and we introduce him properly, let's introduce ourselves. Tim.
Tim Hughes [00:01:06]:
Yes, my name is Tim Hughes. I'm the CEO and co-founder of DLA Ignite. I'm famous for writing the book Social Selling: Techniques to Influence Buyers and Changemakers.
Adam Gray [00:01:16]:
I'm Adam Gray, I'm Tim's business partner and co-founder of DLA Ignite, and our long-term friend and colleague (at times) Giles O'Halloran is joining us today. Giles is a Fractional HR Solutionist and strategic advisor, and he's got a really interesting background through HR, military intelligence and AI technology. So he's a great guy to have with us to help have a conversation around this. So Giles, why don't you first of all introduce yourself and talk a little bit about where you've come from and what you're doing.
Giles O'Halloran [00:01:52]:
Thanks very much, both of you, as always, for inviting me back. So to add to what you've said, yes, I'm a fractional consultant. I work full time but, to be honest with you, I don't have a job. I do discrete packets of work or projects, you know, to provide that consulting support. So that could be anything from developing and training HR professionals, through to providing HR solutions, through to coaching and being a confidant to HR leaders and other executives, as well as advising on the implementation of HR technology and career transition. So there's quite a portfolio of things, all around people. One of the areas I do find really fascinating, and that combines my military reservist and my HR background, is this element of people risk, combined with an ongoing interest in technology. As you both know, whether from my social media days or going back to when I started my career in HR with IBM, that's always been a driving force behind me. So yeah, other than that I'm a Fellow of the CIPD.
Giles O'Halloran [00:02:48]:
I lead the Royal Navy's HR Professionalization program. I've been working quite a bit over the last year with UK police, the Home Office, etc., and have also spent six years as a contract consultant, at times, to INTERPOL. So quite an interesting and fun background in the work that I've done.
Adam Gray [00:03:06]:
Fun background, yeah. It does sound like you're playing for quite high stakes when you do that kind of thing. So one of the things that we see, and Tim and I discussed this just the other day actually, is that there's lots of cyber security type work going on in the world, and many of our past clients have been cyber security companies. But one of the biggest risks is the people within the organization. So given that you want people to be outward facing, you want them to be engaging with your brand, you want them to be having conversations with prospects, you want them to be out there in this broad digital landscape, what can you do to make sure they don't do silly things? Because, you know, once upon a time it was clicking on links. But I guess that with every iteration of digital and AI, the risks and the opportunities for you to get it catastrophically wrong increase. So talk us through: what can go wrong, and how do we stop it from going wrong?
Giles O'Halloran [00:04:15]:
Yeah, I've been in the situation as an HR director previously where we had a number of cases come up around social media risks, in particular with what people are posting, to what extent, and to what audience. And it's been about looking at how we not just identify but measure and mitigate those risks effectively. The starting point is to have some sort of baseline. That can be a social media policy, and it shouldn't be a punitive policy. It's about how you establish good practice, and then provide guidance, training and learning around that, making sure people can use it confidently and comfortably and not feel that they're at risk. But if they do create a risk, then allow them that open discussion: who they speak to, to what extent, and why. Create that connectivity. I think that if you develop that kind of policy and you treat people like adults, they'll act like adults, rather than you treating them like children. A classic HR mistake is that if there's one error that one person or the organization makes, we suddenly have to create a policy. No, you don't create a policy for a minority, you create a policy to help the majority. There's always going to be risks, but it's how you measure and mitigate those risks as individual risks.
Giles O'Halloran [00:05:30]:
And therefore, another thing you can potentially consider across the organization is things like business contact guidelines, you know, getting people to sign up to how they'll behave, how they'll work, shared across the board in terms of what's expected. Coach and advise people, don't control. Give them that opportunity to have their voice, but in a way that's sensible, and also make them aware of the reputational risk and the consequences. One of the things that comes up these days is where people's profiles say, you know, my opinions are my own, but then they state their company. Well, I'm sorry, you're putting yourself and your organization at risk by doing so. There's a reputational connection, and therefore you have to be mindful of what you put out there if you're going to wear both banners.
Tim Hughes [00:06:11]:
I did a talk at an AI event recently and one of the people on the panel, she works in the field of cyber, and she was talking about one of the companies that she worked with, and she said that it didn't matter how much she told people not to click on links. And I know, Adam, what you say about people clicking. But people still do.
Giles O'Halloran [00:06:37]:
Yeah.
Tim Hughes [00:06:38]:
And she said they've run course after course after course after course, and people are still clicking on links.
Giles O'Halloran [00:06:46]:
Yeah.
Tim Hughes [00:06:47]:
And she said that they've actually caught people where, you know, they've done tests at the end, and they basically sit there and get the intern to do the training or whatever, and they still click on the links. So it's one of those things where, in terms of training, it's a bit like the diet, isn't it? We all know that we need to eat less and exercise more, but we don't do it. And so how is it that we can actually get people to understand that they need to stop doing this?
Giles O'Halloran [00:07:20]:
And that goes back to, I think, some things you've highlighted there, which is that it's about understanding human behavior, because you've got to think like the attacker. You know, if we talk about it from an intelligence perspective, we can use what they call OSINT, or open source intelligence, to go and gather information about organizations. And there's so much stuff online that is geared up towards a phishing attack, which is what you're talking about, whereby they'll target individuals who they know will click, by their behaviors, by their interests. And it was only a couple of months ago I posted on LinkedIn about one particular group who were actively targeting HR and hiring managers on LinkedIn by looking at their details, because if they click on it, bang. We'll go for this, you know, it's a CV or resume, et cetera.
Giles O'Halloran [00:08:04]:
They know they're going to go for it. So that's the kind of thing they look for. So I think we've got to be careful about what we do. So it's about mapping the potential risks that are out there, identifying them, then developing some of the training and guidance around them. Do some testing. I mean, even in the cyber security world they have a term for it, sandboxing. Let's test some of this, because people learn through mistakes, so create the opportunity, because people will make mistakes.
Giles O'Halloran [00:08:26]:
You're not going to get it right every time. So it's about creating those kinds of measures that then lead to metrics, risk metrics that show how we need to focus that learning. Because more often than not we do a piece of training that covers all aspects, and that's like basic training, a boot camp; what we're not covering is the key aspects. And when we learn from the threats that happen, and measure and mitigate them, we can then almost relearn and adjust our learning so that we're focusing people on what they need to be doing. Because otherwise those threats are always going to exist. If we look at how an intelligence organization might do it, I mean, they will look for things like the employee's role, their routine, their location.
Giles O'Halloran [00:09:05]:
They'll look at the customers and the clients they work with, the potential supply chain. They'll look at what IP they share, what links to organizational strategy, and that helps them identify what we call targets, or HVTs, high value targets, at which they can then aim a phishing attack, or even a spear phishing attack, which is very directed at what they do. And as a result, that's where this problem lies. So it's getting people to understand it's not just about what to click; it's the importance of what phishing is, spear phishing in particular, but also what you do around something like social engineering, where it's looking for the clues in an email that say, before you click it, take a step back. I mean, I get loads of stuff, as people do these days, on mobile phones, and, you know, a number comes up you don't recognize: don't even open it. If you don't recognize it, don't open it. That's the first thing: sandbox it or delete it immediately, because if it's really important, someone will probably contact you another way. But I think you have to be very careful what you do.
Giles O'Halloran [00:10:02]:
So it's about being mindful to map that and identify those risks. Train people, make sure you're refreshing and targeting the training where it matters, but don't be scared to learn from that. Because if you create an ongoing practice where people know where they need to go, what they need to do, who they need to talk to, you're creating almost that psychological safety, which eventually creates a security culture where people are sharing their knowledge more dynamically. Even Sarah Armstrong-Smith, who's the current chief security advisor to Microsoft, said that 80% of all risks around phishing are human related, in terms of people clicking on it, and that's where most of the attacks lie. So if we can get people to understand and continually practice, even with business continuity practice, all that kind of stuff, it matters, because organizations these days have business continuity plans, but are they tested? They never are.
Giles O'Halloran [00:10:50]:
And this is where functions like HR can be part of it, facilitating these kinds of testing environments so that we keep learning and understanding what the risks are, but also what the potential effects are if we don't train and keep it going.
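As an aside on the measurement point above: a minimal sketch, in Python, of how simulated-phishing results could be turned into per-team click rates that show where to target refresher training. The data, team names and threshold are all hypothetical, purely to illustrate the metrics idea Giles describes.

```python
from collections import defaultdict

# Each record is one simulated phishing email from a test campaign:
# (employee_id, team, clicked) -- hypothetical data, for illustration only.
results = [
    ("e001", "HR",      True),
    ("e002", "HR",      True),
    ("e003", "Sales",   False),
    ("e004", "Sales",   True),
    ("e005", "Finance", False),
    ("e006", "Finance", False),
]

def click_rates_by_team(records):
    """Return {team: fraction of simulated phishes that were clicked}."""
    sent = defaultdict(int)
    clicked = defaultdict(int)
    for _, team, did_click in records:
        sent[team] += 1
        clicked[team] += did_click
    return {team: clicked[team] / sent[team] for team in sent}

# Flag teams whose click rate exceeds an (arbitrary) tolerance, so the
# next round of training is targeted rather than one-size-fits-all.
THRESHOLD = 0.25
for team, rate in sorted(click_rates_by_team(results).items()):
    flag = "needs targeted refresher" if rate > THRESHOLD else "ok"
    print(f"{team}: {rate:.0%} clicked -- {flag}")
```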
Adam Gray [00:11:04]:
So, that all makes perfect sense. But, and there's always a but to these things, isn't there. But if you take one of your roles as an HR professional, where you're trying to identify good people and get them into an organization, that's a very outward facing position, particularly if you've got open roles that you're looking to fill. So when you start to get links or emails that come in from people that you don't recognize, who may have heard through their network, or indeed your network, that there is an open role, or may think, here's a company that I'd love to work for and I think I would be a really good cultural fit with this organization. How does the person, and I say HR, but it could be any outward facing role, sales for example, within an organization, how do I as the salesman or the HR director differentiate between a genuine link that may provide a fantastic opportunity for me and my organization, and something which is a huge risk? Because often these things are very similar in terms of how they appear, aren't they?
Giles O'Halloran [00:12:18]:
Correct. And that's where it's about encouraging people to be genuinely interested in the organization, but pointing them, and it's like point and shoot, getting them to go in the direction they need to go, whether that's an applicant tracking system, an ATS, or some kind of HR software which can provide that layered defense to make sure those phishing issues, etc., are controlled. That's your first line. I think a lot of people think if you apply through an HR system, oh, it's devalued. Well, no, it's protecting the organization, and the individual as well in terms of their data. Because the other thing, from a GDPR perspective, is individuals can share their CV, but once you've sent it to someone in an organisation, how do you know it's been handled effectively and managed effectively?
Giles O'Halloran [00:12:55]:
There are two sides to this risk that I think sometimes we've got to balance. So I think first and foremost it's having some kind of systemic way of collating, measuring and understanding the sourcing, which can have an IT security platform associated with it, or some kind of management tool that minimizes the risk; and making sure that people are aware that if they get a particular CV, it gets sent to a secure sandbox where it can be tested, et cetera, a recruitment box, again with an IT security background. So I think there are ways of mitigating that risk if you do it properly. Oh, goodness.
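A rough sketch of the layered routing described above: unsolicited CV attachments get quarantined for sandbox scanning rather than delivered straight to an inbox. The sender list, extension check and return labels are assumptions for illustration; a real setup would hang off the ATS and mail gateway APIs.

```python
import os

KNOWN_APPLICANTS = {"jane@example.com"}   # hypothetical ATS-registered senders
SAFE_EXTENSIONS = {".pdf", ".docx"}

def route_cv(sender: str, filename: str) -> str:
    """Decide where an inbound CV attachment should go.

    Returns 'ats' only for senders already registered through the
    applicant tracking system; everything else is quarantined so it
    can be scanned in a sandbox before a human opens it.
    """
    _, ext = os.path.splitext(filename.lower())
    if ext not in SAFE_EXTENSIONS:
        return "quarantine"   # double extensions, executables, etc.
    if sender.lower() in KNOWN_APPLICANTS:
        return "ats"          # handled inside the tracked, secured pipeline
    return "quarantine"       # unsolicited: sandbox first, deliver after scan

print(route_cv("jane@example.com", "cv.pdf"))        # -> ats
print(route_cv("unknown@evil.test", "cv.pdf.exe"))   # -> quarantine
```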
Tim Hughes [00:13:31]:
So Andrew Slasser, he's basically asked the question, what's a hacker's favorite...
Adam Gray [00:13:36]:
What.
Tim Hughes [00:13:36]:
What's a hacker's favorite season?
Giles O'Halloran [00:13:41]:
Very good.
Adam Gray [00:13:42]:
Yeah. So how do you advise a leadership team to navigate these potentially dangerous waters? Because, you know, part of the value that you have in having great employees within an organization is that those great employees can go out there and mobilize their networks to generate more visibility and sales opportunities and all of this, which requires them to be active, to be engaging, to be in conversation and dialogue with as many parties as they possibly can, to push the company and their own visibility out there into the marketplace. So how do you manage that, which is what high growth, highly effective organizations do, how do you manage that alongside this idea of kind of locking things down, so that you're in like a diving bell and you've got limited access to the outside world? One of them is obviously potentially high reward but also very high risk. The other is potentially zero risk or very low risk, but also potentially very low growth, or very low benefit. So how do we balance these two things?
Giles O'Halloran [00:14:55]:
Well, it doesn't happen overnight. The other thing is, you know, you're going to learn through the process, without a doubt. It may be a path where you engage with experts, like your good selves, of course, but more importantly, I think it's looking at some of the baseline stuff to do, such as privacy settings; making sure you create a shared culture internally; some training, some learning; tapping into internal comms and your marketing team to say, look, what is the content, what are the key hashtags, imagery, etc. that we can share out there? So we give that continuity of message without copying and pasting, so that they understand that, but also understanding how one person can potentially release, but the team can amplify. That can be really interesting, because you've got one release, but multiple people then sharing the same message, which we know is secure and safe. But I think what it comes down to as well is it's not just a leadership approach.
Giles O'Halloran [00:15:41]:
Otherwise you're creating a single point of failure, and security issues always come from single points of failure. So I think the opportunity here is for leadership to engage and connect capability across the organization: liaise with IT, liaise with HR, liaise with sales, to look at how we create the best practices, well, I say good practices, because they're always evolving, of how we do this. And it's going to be a moving feast, effectively, in terms of having those conversations ongoing, so that you come up as a team with relevant communications, relevant content, and you can share across the different teams and leverage that, amplify it through those methods as well, and as a result maybe create something more inclusive, yet more protected, but also far more social. Because you're working as a team, internally, socially, to produce the right messages, the right media and the right content. But in the same light, you can then share it much wider with the audiences and effectively amplify the networks that you have.
Adam Gray [00:16:33]:
Yeah. So one of the questions I was going to ask is, when an employee makes a mistake on social media, what does a reasonable response from the company look like, as opposed to something which is purely legal or policy driven? And I think this is particularly pertinent today given what happened yesterday at the Coldplay concert with the CEO. Which obviously has caused at the very least a degree of embarrassment. I guess there's no such thing as bad publicity, so there is some benefit to the organization. But how do you kind of reel in, or rather rein in, people that have made a mistake on social media? And how does that roll out in terms of learnings? And you said at the beginning, not punitive measures.
Giles O'Halloran [00:17:27]:
Yeah. So this is where straight away I'd argue that social media is not the issue, because the recording was amplified through social media; that's a different scenario. These individuals acted in the way they did, you know, there's an element of human agency, without a doubt, and people make choices and they have to live by them, of course, and that's the problem there. But going back to that, I don't think social media was effectively the cause, because it just amplified what was seen, you know, that's what we saw. As for how that's then dealt with, that's going to depend on the organization's particular values, their policies, how they do that within legal constraints. And those legal constraints will also differ, because if those individuals are based in the US, or they're based in Europe, or they're based in Africa or the Middle East, there can be different employment law controls around that. So you've got to respect those as well.
Giles O'Halloran [00:18:17]:
But in the same light, you know, even though I think the CEO came out and said, well, my privacy has been targeted, etc., there's some truth to it, to what extent you reveal things. But at the end of the day it's about understanding that public image, and you wear that for the organization. And when it comes to punitive responses, yes, there are going to have to be some conversations, but how far do you go? This is in a public setting, it might be senior leadership, but this is a very private relationship; how intrusive can you be, to what extent, and how much does that represent a risk? That, I think, again needs to be taken into account within the legal limits required by each country, and also whatever the ethics and practices are. And the way I always look at it, and help managers, from a risk assurance mindset, is that there are three layers to look at in the way you deal with any situation, whether it's social media or otherwise. As a simple sort of coverall, the three layers I look at, which are effectively three layers of assurance, are: first and foremost, legal. Look at what the legal scenario is, how you should deal with it. Is there a breach? Whatever it is, deal with that. Because if you stay legal in everything you do, then you're protected. The second layer of defence is ethical, and ethics change dependent on the context, the environment, or even potentially the culture. So as a result you have to understand, from an optics perspective, what the ethics are.
Giles O'Halloran [00:19:41]:
And if you maintain that you're legal, and you're keeping to the ethics and values of your organization, that's what's important, and you hold people accountable to them. The final layer breaks into two, and it's what's called reasonable. If we look at reasonable within the law, mainly UK law in this context, but I think it does reach further: reasonable is, if you ask the average person in the street what the most likely response is, that's what we call reasonable. So if you think about what you should do as a result, what would be reasonable, then again that matters. But reasonable is highly contextual as well, because what you might think is appropriate in one culture or country is not synonymous with another in the rest of the world. So again it's highly contextual; we've got to be thinking out there.
Giles O'Halloran [00:20:22]:
And that again adds risk in a social environment, social media, because you might do something in the UK and it becomes amplified globally. So that's one side of reasonable. The other side of reasonable is the financial bit: if we have to make a response, then what's the financial risk outcome, what can we measure? And that again we need to take into account, because that can then actually help evaluate what we should do, particularly as a response: this is the likelihood of damage, so what can we do to defend against it, or what should we do in order to protect the organization?
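Giles's three layers of assurance read naturally as an ordered checklist: legal first, then ethical, then the two halves of reasonable (public perception, financial exposure). A minimal sketch of that gate-by-gate ordering; the field names, wording of the questions and threshold are illustrative assumptions, not anything prescribed on the show.

```python
from dataclasses import dataclass

@dataclass
class IncidentAssessment:
    # Answers a leadership team would supply for a given incident; the
    # fields mirror the three layers described above.
    is_legal: bool             # layer 1: does the response stay within the law?
    fits_our_ethics: bool      # layer 2: does it match the org's stated values?
    seems_reasonable: bool     # layer 3a: would the average person find it proportionate?
    financial_exposure: float  # layer 3b: estimated cost of damage/response

def assess(incident: IncidentAssessment, budget_tolerance: float) -> str:
    """Walk the layers in order; stop at the first one breached."""
    if not incident.is_legal:
        return "stop: resolve the legal breach first"
    if not incident.fits_our_ethics:
        return "stop: response conflicts with organizational values"
    if not incident.seems_reasonable:
        return "rework: response would not pass the 'average person' test"
    if incident.financial_exposure > budget_tolerance:
        return "escalate: financial risk exceeds tolerance"
    return "proceed with the planned response"

print(assess(IncidentAssessment(True, True, False, 10_000.0), 50_000.0))
```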
Tim Hughes [00:20:57]:
Because if we take the Marks and Spencer situation, it looks like that was an internal person that caused that, and in fact it was probably a supplier to the organization.
Giles O'Halloran [00:21:10]:
Yes, it looks like it's a third party and a phishing risk.
Tim Hughes [00:21:14]:
Yeah, yeah, well it looks like they collaborated with the hackers.
Giles O'Halloran [00:21:23]:
Yeah, and that's a case where you have a malicious internal actor. That's a disciplinary offense, but what's actually happened is criminal. And that's where HR wouldn't necessarily manage this; it might have to be handed over to the police and dealt with appropriately. You might then have a parallel process to see whether this individual is appropriate to work in the organization, going through the relevant and appropriate investigative and disciplinary process. You don't just go, right, you're out. You do that to protect yourselves.
Giles O'Halloran [00:21:48]:
And that can be run in parallel. In that particular case, I mean, so far it's cost at least, what, 300 million, if not 750 million in market cap value, I think they're predicting, in terms of what that looks like. So that's a significant whack. But again, you've got to manage the outcome of that appropriately, because there's the criminal case and also the internal case, and you have to manage that through your own processes within the law. But in the same light you've got to then look at your course of action: what's the opportunity in terms of training, in terms of looking out for people? Because phishing attacks not only target people; when it comes to internal threats, Amazon had the same problem as well not so long back, where a manager identified that they couldn't do this alone, so they had to recruit others in order to help them create some kind of internal problem, which is what they did. And as a result, this is where again disciplinary action, criminal action, etc. can be taken.
Giles O'Halloran [00:22:40]:
But again, you've got to have the right investigative background and do it through your own processes, otherwise you fall foul of the law, and also potentially at tribunal.
Tim Hughes [00:22:47]:
So Mark Carlson basically has put in the comments, why did the hacker break up with the firewall? Because it was too possessive and kept blocking all their connections.
Adam Gray [00:22:58]:
But yes. And he's always one for joking. I do wonder what shirt he's wearing today, incidentally. But he said this as a follow-on to that comment he made: he spent the last week arguing with his bank over sanctioned payments, as they could not receive any emails or attachments due to firewalls. Wasted all of their time, and cost money, and not for the first time either. And I guess that this is the risk, isn't it, that I've got a very open kind of policy in terms of sharing stuff digitally.
Giles O'Halloran [00:23:37]:
There is. You've got to respect the regulatory requirements from their side in terms of financial conduct, etc. There are so many rules now that even my other half, who was American born, when she set up an account, they suddenly said, well, you've got to check with the US because you were born in the US, etc. Well no, I've lived all my life here, I'm naturalized British, I have no connection with the US. But they have to follow their rules, because that's what the regulations say. Unfortunately that's where some of that comes in, to protect individuals as well as the bank. And I know it's, excuse the phrase, a ball ache, but at the end of the day that's sometimes what has to happen to protect your stuff. And I know it doesn't make life easier; maybe AI in the future will help us with that, but who knows?
Adam Gray [00:24:18]:
Yeah, so AI. So an interesting thing here. You obviously have spoken about AI in recruitment and how it can be used as a recruitment tool. So how can this lead to bias? You know, you explain to companies how they can use AI to improve their efficiencies and practices. But what biases can AI introduce? Because I guess that we, as a collective, the business world, are being fed two stories about AI, aren't we? We're being fed the fact that it's the solution to all of our problems: massive efficiency savings, plugging skills gaps in individuals and in organizations. That's the good side.
Adam Gray [00:25:08]:
The downside is that AI won't let itself be turned off, and it lies about things, and it's manipulating us and is a real risk. You know, everyone's citing Skynet and Terminator and all of that kind of thing. So we're kind of stuck between these two things now. We know that AI is here to stay. But what kind of biases are you seeing in terms of how AI is helping people to make decisions? Or are there no biases? Is it actually, if you program it well, doing what you need it to do?
Giles O'Halloran [00:25:43]:
Well, again, it goes back to, what's the Latin for it, quis custodiet ipsos custodes, who guards the guardians? In terms of, you know, if you are building a system, whoever builds it is potentially going to build in their bias and configure it accordingly. And also whichever systems it goes out and connects with, you know, that causes problems. There are two ways potentially to consider this: whether you have a closed or an open AI scenario. If you've got an open AI scenario, that is going to reach out far into the ether and look and gather data.
Giles O'Halloran [00:26:14]:
And potentially you can test that by looking at what the source of that data is. You can ask about that source, whether it's reputable, to what extent it can be useful. But if you've got a closed AI system, you've got the risk that you're just using your own historic data. And again, if you've got any particular bias that's historically within the data, then it's just going to pull that out and create an ongoing loop. So I think in any scenario where you're using AI for anything human related, you need a human in the loop, to make sure you've got that ethical oversight and the ability to do audits, to make sure that, you know, you stress test it, make sure it's working and continually test it. I mean, I've done tests with ChatGPT around various things, I thoroughly enjoy it, but I've also tested its ethics. So for instance, there was a really interesting report from the UEA, I think it was last year, that highlighted exactly that.
Giles O'Halloran [00:27:02]:
When you look at AI, what is the political bias of AI? And from the research that the UEA did, the University of East Anglia up in Norwich, they identified that more of the insights and responses through ChatGPT, for example, were left leaning, because of the data that was accessible out there; the right wing were publishing less. So it's quite interesting where that skew was coming from. That's what they reported in their documentation. And then look at how AI is used as a whole these days. I did a bit of a stress test myself about a week or so ago. Because I was looking at profiles on LinkedIn and how it could help, I asked ChatGPT to identify, through aggregation, which individuals, if they worked for certain organizations, would or could potentially work for SIS or MI6. What ChatGPT came back with as a response was really interesting: it said, I cannot do this because I would be encroaching on privacy and security related subject matters. And I then tested it with other security agencies that I knew, put those in, and it came back with similar responses. So again, it depends how the AI is configured.
Giles O'Halloran [00:28:08]:
I think you're right to highlight that, and it does depend on the programming. The mindset is quite an interesting thing to consider as a whole, but also it's how you test it. I mean, I know there was a joke some time ago, you know, if you asked ChatGPT how to make a bomb, it wouldn't tell you, but if you said, how should I not make a bomb, then it would come back with the answers. And I think that sort of highlights that it's a learning tool, it's machine learning, as you know. So again, I think we've got to be careful how we use it, but I think there are some configurations built in. But be mindful to stress test your answers, because I find myself now, I get good quick answers through ChatGPT maybe, but I'll then go to Google to sense-check through other sources, to make sure it's accurate and relevant and get a balanced opinion. So I think it's using our human brain and evaluation to make sure that we minimize the bias, by assessing and critically analyzing the data that's being shared. That's what really matters.
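One way to act on the "stress test it yourself" advice: run the same screening input through the AI twice, changed only in a single demographic signal, and compare outcomes. The score_cv function below is a hypothetical stand-in for whatever screening tool is in use; in a real audit it would call that tool's API, and you would repeat this over many paired variants.

```python
def score_cv(cv_text: str) -> float:
    """Hypothetical stand-in for an AI screening tool's score (0..1).
    A real audit would call the vendor's API here instead."""
    return 0.8 if "Managed a team" in cv_text else 0.5

BASE_CV = "Managed a team of 10 engineers. 8 years' experience. Name: {name}"

# Paired variants identical except for the name, a common proxy signal.
variants = {"variant_a": "Alex", "variant_b": "Priya"}
scores = {label: score_cv(BASE_CV.format(name=name))
          for label, name in variants.items()}

gap = abs(scores["variant_a"] - scores["variant_b"])
print(scores)
if gap > 0.05:   # arbitrary audit tolerance
    print(f"Flag for human review: score gap {gap:.2f} on a demographic-only change")
else:
    print("No gap on this pair -- keep auditing with more pairs")
```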
Adam Gray [00:29:00]:
Yeah, which brings up a really interesting kind of human failing. So in order for me to challenge the answer that I've got from AI, I have to be confident in my own abilities, and confident that I've got a better handle on this than AI. And certainly in my experience of coaching individuals, very rarely do they think they're right if they're presented with something which is a credible alternative. So oftentimes, you know, when people that we're coaching to publish on social write something using an AI tool and we say to them, does this sound like you? Yeah, well, it sounds close enough to me. Do you think you could rewrite this better? No, no, I don't think I could. I think it's really good already. I can't find anything wrong with it. And you know, every time that I publish something using AI, Tim basically says, well that was crap, wasn't it? You know, it's much better when you write it.
Adam Gray [00:29:58]:
And I don't profess to be a great writer, but it's very interesting that many times people don't believe that they have the competency to challenge these things that are, in quotes, fact. So how do you go about upskilling people within business, particularly maybe the HR area of business, to believe that they are better at sifting CVs, and they are better at making determinations about people that are applying for jobs, than the supposedly factual tool that they might be employing?
Giles O'Halloran [00:30:34]:
So you asked a couple of questions there which I think are really important, finishing with one around how HR can apply it. There has to be some form of learning and exposure to the AI, because we learn through doing. So I think it's about how you use it, and to an extent you learn as you go. And that also means, how do you position AI? This goes back to this mindset of being cobotic: it's not that AI should necessarily replace humans, it's working alongside them. And how do we dovetail our capabilities effectively? That's what matters. And this is where we change the dynamic from focusing purely on AI to what I like to call IA, which is intelligence augmented: we're talking about how it could help us hone our capabilities, but also appreciating that human talents matter. And therefore it's having an opportunity to provide some kind of training around how to use the AI tools you have, because they are different, they're not all the same. And there are platforms out there now where you can actually create your own AI tools and bots to help you, almost like an AI toolkit.
Giles O'Halloran [00:31:29]:
Some fascinating tools are out there, but in some ways it's how to use them, but also how to stress test them, and appreciating that you learn together. Because even with ChatGPT, when you use that, if you hone its answer and say, have you thought about this, it will come back with, well, thank you for that, and that helps it; that's the prompt engineering side. So from an HR perspective, and across organizations as a whole, there has to be an element of learning, ongoing exposure to it and learning through it, but done in a safe space. But going back to your point about where people make choices themselves, this goes back to the really interesting questions around agency. Agency is where we make a choice, and sometimes we'll make a choice that might be wrong, sometimes we make a choice that's right, and sometimes we make a choice somewhere in between the two. We navigate our way forward.
Giles O'Halloran [00:32:14]:
Machines, because they're built like that, will have a similar format. They'll be configured to a certain way of thinking that's been programmed, configured through machine learning. But again, that's going to make mistakes. So maybe it's an opportunity, again going back to that exposure to and learning with AI, so that both sides learn from each other and build that capability that matters. And I think you're right to highlight that sometimes, even though AI might save us time, and that's achieving efficiency, it's not necessarily effective. That's input-output versus input-outcome. What I have found in particular, with someone I know, a CFO at an organization I've been coaching and working with: he wrote an article, and he could have written it with AI, but he wrote it himself, and as a result it was far more powerful, because people who knew him through the network knew it was genuine, authentic and came from him.
Giles O'Halloran [00:33:02]:
And that in itself, I think, was far more powerful; the reception of it by his audience was far stronger.
Adam Gray [00:33:08]:
Yeah.
Giles O'Halloran [00:33:09]:
And yet he lacked confidence in himself.
Adam Gray [00:33:11]:
Well, that's the thing, isn't it? And there's a huge amount of empowerment that is required for people to believe that they can do a job better than these tools. You know, we see it time and again in terms of how people engage their audiences, and it creates, certainly from our perspective when we look at this, a huge number of paradoxes. You spend a huge amount of time getting on somebody's radar in the sales world, and then you use your bog-standard scripted outreach that's built into the tool as a sample script. Oh well, that'll do, because I can't write anything better than that. Where actually you've spent so long getting people into your, I was going to say funnel, but it's not even that, onto the conveyor belt of how you're going to process these people as leads. And yet the very bit which is the determining point about whether they decide that they're going to listen to you speak or not is the bit that you can't be bothered with, or don't feel competent doing. So you said people need to roll up their sleeves and be trained to use AI, because we only learn through actually using this stuff.
Adam Gray [00:34:19]:
So how many organizations out there have actually got a formal AI training program for their employees or staff or people that are around the organization? Because it seems to me that with the ones that I'm talking to, it's very kind of ad hoc. You know, people go off and do their own thing, and perhaps they will be thinking about engaging somebody like yourself, you know, someone that understands AI and is able to show them, not necessarily from a risk perspective, but from a usage perspective: here are ways that AI might better help you do these jobs more efficiently. But they're still in the very early stages of this. You know, there are very few, or seem to be very few, formal training programs out there, and lots of training-seminar-type things that come before anything is actually built, you know, so they're really testing the water. So how many organizations are actually taking this seriously, in your experience?
Giles O'Halloran [00:35:25]:
I think there are a lot of senior corporates that are taking this seriously, and they're trying to build some form of program, or they're trying to integrate AI into some of the leadership roles. We're seeing some C-suite level roles being combined to look at people and the commercials, etc., and how that works. But I don't think it's become an effective trend yet, and what they're using it for is more about efficiencies and how to use a tool internally. But that's a starting point. Going back to your question, or the point you made, about salespeople, for example, and how they might use the same scripts, etc., I think it's about encouraging people, or training people, to use the tool effectively. Instead of saying, you know, how do I use this script, maybe say, I'm approaching this client, what are the key things I potentially could ask, or consider, or talk to them about, or how could I serve them?
Giles O'Halloran [00:36:11]:
Based on this information, AI can give you the tools; then have a call. Don't just copy and paste it and send it. Have a call, make that human connection. That's where the two things matter and combine the most. But I don't think there are enough organizations doing it. I came off a call earlier this morning where I'm working with a couple of other individuals; one's a doctor of AI, the other is someone who's come from a background at Microsoft, Google, et cetera. And we're looking at building some kind of training opportunity around this, helping organizations understand not just the impact, but also, you know, how do you build ongoing AI capabilities? Because at the moment, yes, there are a few organizations that are looking, but they are very few.
Giles O'Halloran [00:36:50]:
I think it's more like, effectively, have a go and play with Copilot. It seems to be in the back of most Microsoft packages, and they're buying the license for that, but no one knows how to use it. But there are some interesting organizations out there. So on the call I had, there was an AI developer that has recently pivoted his business model to become an AI trainer, because he's realized that there's a huge market out there for building that capability. So I think at the moment you're going to see some moves in that direction. But the problem is, if you've got an L&D or training team who aren't AI capable, then there's a gap in that capability and they can't teach it. So I think initially it's about going to external partners who might be specialists in the field and contracting them in, but also training and developing your L&D team to support. That's the only route I'm seeing at the moment that might be viable as a starting point.
Adam Gray [00:37:37]:
Yes. So we've got an interesting question here from Lawrence. At what point does human advantage come into play as AI becomes so common? Labeling humans versus bots, and in that context it will come down to messaging status, for example, "this is a human typing now", so how do we manage that? He's called it a deception gap. You know, if you want to speak to a human. I guess the...
Giles O'Halloran [00:38:05]:
Question is, yeah, it's a perception, rather deception, but I get what you mean by it could be deemed as deception.
Adam Gray [00:38:09]:
Yeah, yeah. So how do we differentiate between these two things? How do we, I guess, leverage the human value of what it is that we do, rather than this huge potential influx of AI content and engagement points? Because I guess that one of the big issues is that for many people there's been a barrier to producing content for themselves to distribute, either through blogs or through social media: they don't feel that they've got the competency to do it, or they haven't got the time to do it. Now those barriers have been removed. So we're likely to see a huge increase in the volume of content being produced, aren't we?
Giles O'Halloran [00:38:59]:
So I don't know. I mean, if I had the answer I'd be a billionaire, there's no doubt about it. And there's the philosophical question I quite like as well, because that goes back to, do two negatives make a positive?
Tim Hughes [00:39:09]:
Do you want to bring up the philosophical question from Lawrence as well?
Giles O'Halloran [00:39:11]:
"Is Adam an AI agent and a deep fake? Or both a deep fake?" So this is where, again, part of the equation is, well, maybe he is. And again, do two negatives make a positive? And there's lots around philosophy and logic there which is quite interesting. But in terms of at what point that advantage comes in, again, I think it's very hard, because it comes down to the individual, and it can be simple things that we know. If there's a typing error, it's probably human. You can tell someone has potentially copied and pasted stuff from ChatGPT because there might be a double space and a period, or a full stop, at the end. Usually that's a signal that someone's copied and pasted, straight away, little things like that. But I think it's hard to say at this moment in time what the difference is unless you know the individual.
Giles O'Halloran [00:39:55]:
And that comes down to human connection. If you want to build human businesses, there's going to be human connection. And even if you use LinkedIn and we use AI as a tool to support us, anyone on this call will probably know that you might have thousands of contacts on LinkedIn, but you don't know them all; you will remember those people that you have met, the physical connection, that human-to-human, tribal thing. And I think that's where sometimes we're focusing too much on how can we do this, how can we do that. Well, no, let's just go back to what we do really well as humans, what we have learned through centuries, if not millennia, of what we do, and the fact that we're trying to build AI that still isn't as complex as a human brain. We still don't understand the human brain. So I think this is an opportunity for us, rather than focusing on what the AI could do or what it could become. And I think, and this is my own opinion, no one else's per se, I think we're arrogant as a species to say we know what's going to happen with AI. I think it could be negative, it could be positive, or, I think, there's a Buddhist third way somewhere in between if we do it right. But we're learning as we go.
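The "double space before the full stop" tell mentioned above can be written down as a crude text check. It is only a weak heuristic, wrong in both directions often enough, which is rather the point: weak signals plus human intuition, not proof. The specific patterns are assumptions for illustration.

```python
import re

def copy_paste_signals(text: str) -> list[str]:
    """Return weak hints that text may have been pasted from elsewhere.
    None of these prove anything; they just invite a closer look."""
    signals = []
    if re.search(r" {2,}\.", text):                # two+ spaces before a period
        signals.append("double space before full stop")
    if re.search(r"[ \t]+$", text, re.MULTILINE):  # stray trailing whitespace
        signals.append("trailing whitespace")
    if "\u00a0" in text:                           # non-breaking spaces often survive pasting
        signals.append("non-breaking space character")
    return signals

print(copy_paste_signals("Great post  . Thanks for sharing\u00a0with us."))
```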
Tim Hughes [00:40:55]:
But if AI drives the economic value of intelligence to zero, so that everybody basically is intelligent, then it's what makes us different that is the thing that is going to differentiate us. Because AI basically means that we're all the same; we all have the same intelligence.
Giles O'Halloran [00:41:13]:
No, no, I disagree, because again it's about how AI has an application. If you're looking for knowledge and things like that, fine. But look at, can AI do plumbing? No. Can AI do other key crafts, etc.?
Adam Gray [00:41:27]:
No.
Giles O'Halloran [00:41:27]:
What's more, if AI doesn't have the data, because it's never come across it, i.e. it's novel, you know, it doesn't have the human background. We grew up, no disrespect to anyone of faith or belief on this call, but we as human beings evolved from apes, poking termite mounds with sticks to try and work out, how does that work? How do we get a termite out and eat it? AI doesn't have that. Even though it has machine learning, it still has to be exposed to data in order to learn, and it still hasn't had a physical side.
Giles O'Halloran [00:41:56]:
So I think we are trying to marginalize human capacity to purely knowledge, not its application, the physical things we have as human beings, which still have value and could redefine what some of the work we do is going forward.
Adam Gray [00:42:11]:
Yeah, absolutely. But I was just going to say, there's a lovely quote, and I forget who it was that said this. They said, I don't want AI to do paintings and write poetry for me so I've got more time to do the washing up; I want AI to do the washing up so I've got more time to do paintings and write poetry. And I think that, you know, as we see AI being applied, so often we're seeing it writing content and creating images for people.
Giles O'Halloran [00:42:44]:
Yeah.
Adam Gray [00:42:44]:
Rather than doing the donkey work.
Giles O'Halloran [00:42:48]:
I think that's moving to agentic AI, which is what we're talking about; that's where we use it as digital labor. And you're right, it's how we discern and move between the two.
Adam Gray [00:42:57]:
But I mean, that was a really interesting comment from Lawrence about his LinkedIn outreaches. So often with the messages that we get from people, it's quite difficult to determine initially whether these are mass-farmed bot comments or real comments from a human being. Now, obviously, as you get exposed to more and more of these, you get a better handle on it. But I'm thinking here about people that are not used to getting lots of messages on LinkedIn. All of a sudden they're starting to get some messages and they think, oh, this is really interesting, I'd like to engage with this person. And actually, you know, for the more sophisticated user there may well be triggers that make you concerned: this is somebody that's so far out of your network and they're sending you quite an intimate message about how they're loving your posts and yada, yada, yada. So I think it's quite interesting, how do people know for certain, I guess is the question, whether this is a person that they're having a conversation with, or whether this is something which is beating the Turing Test?
Giles O'Halloran [00:44:08]:
Yeah. And that's where again, maybe it comes down to human intuition, which you can't put into a machine, which is understanding that if I don't have a link to this person and it's suddenly come out of nowhere, let's be a bit cautious here about where it's...
Tim Hughes [00:44:22]:
And they've offered me a million pounds with no...
Giles O'Halloran [00:44:26]:
Well, we did. We saw that in the 90s with various emails from countries in Africa offering us lottery wins, etc. So again, it's that human sense that this isn't the normal state of things, so there must be something. We should be curious. It's a magic word. It built us as a species; you know, we are fundamentally curious. So let's be curious about asking the why: why do I think I've got it? Where does it come from? And what's the intent of the individual? And if it doesn't feel right, it probably isn't right.
Giles O'Halloran [00:44:56]:
I'm not saying it should end the conversation, but again, we have to be careful there.
Adam Gray [00:45:01]:
Yeah, yeah.
Giles O'Halloran [00:45:03]:
So coming back to, you know, security as a whole: if you don't understand that email coming through and it looks random, it probably is. It might be a risk.
Tim Hughes [00:45:12]:
Yeah.
Adam Gray [00:45:12]:
Which is exactly going to be my next point, kind of coming back to this whole area of security and safeguarding within an organization. How do you go about training the larger team on what looks like a risk and what doesn't? Because I think part of the challenge is that there will be people that, in quotes, get it and people that don't. And the people that get it are the ones that will be driving this "we need to be more secure" in many instances. And the mere fact that they understand what some of the risks are, and I don't profess to understand many of the risks, but I understand some of the risks, puts me in a very different position to my mother, who's 90.
Giles O'Halloran [00:45:53]:
Correct.
Adam Gray [00:45:54]:
Who might click on any old link that comes in an email because, oh, it's not somebody I recognize, but they want to have a conversation with me. Okay, so how do we start to instill those basic building blocks of what's safe and what isn't, in a way that doesn't require us to build training courses, get IT involved with building sandboxes and secure mailboxes and that kind of thing?
Giles O'Halloran [00:46:18]:
So what was your question again, Adam? It seemed to be quite convoluted. Yeah, let me talk to the human.
Adam Gray [00:46:27]:
Isn't it always quite convoluted, mate? No, the point is, how do organizations, and I'm thinking particularly about small and mid-sized organisations that haven't got a huge IT function, how do they start to think about doing this in a more secure way that doesn't impact on the running of their business?
Giles O'Halloran [00:46:43]:
Yeah. Okay. So first and foremost, going back to your point about different generations and how they might behave: in reality, don't blame people. Sorry, don't blame the technology. It's people, it's about behaviors. Because before we had, you know, AI and the risks that come with it, we had people doing phishing emails, we had phone scams.
Giles O'Halloran [00:47:06]:
You know, my great aunt, who unfortunately passed away last week, was subject to multiple phone scams. Oh, someone's talking to me, etc., and before you know it they're taking the money from her account. Before that we had people making prank calls on phones, etc. It's not the technology, it's the people behind it. We have to understand it's behavior driven. So when it comes to small organizations, it's about understanding human behavior. And you can't just wait for something to happen.
Giles O'Halloran [00:47:32]:
Look at how you can train and look at opportunities for your people, whether that's engaging a third party externally or looking at your own internal capability. Even as a small organization, it's still a risk, a significant risk. So it's identifying, from your own experience or otherwise, the types of threats that are out there and what people need to do, in terms of the types of employees involved. Because again, most organizations have more than one type of employee; they might have full time employees, full time contractors, contractors, freelancers. Well, they're all affected. But also the nature of the work they do, and where they do the work. Because if you're working from an office, that is one threat environment, but working from a coffee shop, that's a completely different threat environment, with its own associated risks.
Giles O'Halloran [00:48:15]:
So provide guidance around that. They need to know the right process, the right policy and the right people to reach out to when things go wrong; provide that guidance, and make it as simple as possible, so you can effectively minimize and mitigate that risk. Those are the things that matter, that will prevent things happening. But it does mean being proactive. I don't think it's necessarily one thing, because you've got to keep refreshing it on an ongoing basis. You've got to look again, build metrics to look at: where are these risks coming from, where are they most likely? How can we build some learning, or specialist support, around that?
Giles O'Halloran [00:48:48]:
Can we engage a third party? There are lots of different answers to that question; it will depend on your budget, it depends on the potential risk, and also what the level of impact is to your business. Because people always talk about these risks, but you've got to quantify them. Any risk has to be quantified and qualified before you mitigate it. And so therefore, I think there is so much in that question which could be multiple different things. We'd have to identify what was most likely, possible and probable, and therefore mitigate against that.
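Quantify and qualify before you mitigate: the standard way to do that is a likelihood-times-impact score over a simple risk register. A minimal sketch with made-up entries and arbitrary bands, just to show the mechanics being described.

```python
# Hypothetical risk register: (risk, likelihood 1-5, impact 1-5).
register = [
    ("Phishing click by staff",     4, 4),
    ("Unsolicited CV with malware", 3, 4),
    ("Laptop used in coffee shop",  4, 2),
    ("Malicious insider",           1, 5),
]

def band(score: int) -> str:
    """Map a likelihood*impact score onto coarse treatment bands."""
    if score >= 15:
        return "mitigate now"
    if score >= 8:
        return "plan mitigation / monitor"
    return "accept and review periodically"

# Highest scores first, so budget goes where likelihood and impact combine worst.
for risk, likelihood, impact in sorted(register, key=lambda r: -(r[1] * r[2])):
    score = likelihood * impact
    print(f"{score:2d}  {band(score):32s} {risk}")
```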
Adam Gray [00:49:20]:
Yeah, I mean, it sounds to me like people should talk to you to get...
Giles O'Halloran [00:49:24]:
No, I'm an enthusiast, not an expert. I always say that. It's an area that's always fascinated me because of my background, and I think this doesn't go away. IT security was an issue in my own experience. I showed, even back in my recruiter days, how easy it was to fish for information. I did this on non-company systems with an organization I was working for, because I had to recruit IT security people and we couldn't find enough of them. So I set up a fictitious profile, gave it a name, a picture and so on, and went out to have chats on Yahoo Messenger and gather lots of information about the IT security market. I deliberately made the profile female, because there are very few women in the IT security environment.
Giles O'Halloran [00:50:10]:
Suddenly every IT security person out there wanted to share their stuff. I gathered so much information that I could say, this is what the IT security market's like. It's so easy to do, and I did it from my own experience, right before I went into HR. But the point was to show how easy it can be to gather this kind of information, present it back and say, this is where your potential risks are. So for organizations of any size, if you are concerned about the risk, talk to people, quantify that risk, and put in some kind of risk framework or matrix that you can manage, whether that's having IT internally or working with external providers. You have to look at what size, budget and capability you have; that really determines the approach.
Adam Gray [00:50:54]:
Yeah.
Tim Hughes [00:50:57]:
So, on to Lawrence's next comment: "I think many consumers are starved of realness and many run through scripted interactions with back of head, eyes rolling." I think what he's saying there is that the biases come into play. If someone sends us an email that says, here's a million pounds with no strings attached, immediately people start thinking, wow, I could do with a million pounds, rather than thinking, this is out of the ordinary and I should just delete it.
Giles O'Halloran [00:51:34]:
Yeah, that's it. The feed-the-greed kind of mindset.
Adam Gray [00:51:36]:
Yeah. And I guess, to a certain extent, we all hope that we're going to win the lottery, that our post is going to go viral, that these things that solve all of our problems are going to be true. One question, probably the last one we've got time for today, is one Bertrand wanted to ask: how can an organization foster a culture of trust that encourages employees to report their own digital mistakes rather than hiding them for fear of punishment? And that's a really big one. You said learning from things that go wrong is really important, so how do you make that acceptable?
Giles O'Halloran [00:52:21]:
That's an interesting one. I'll answer it, but just before that, going back to Lawrence's comment about realness: I put a post out just this week to freelancers saying, hold fire, don't panic, the market's not brilliant at the moment and we're coming into summer. People came back and responded, thank you for that, because I felt alone. That was the realness you can put out there: be exposed and say, this is reality. A lot of people are doing that, which is good. I think it's healthy.
Giles O'Halloran [00:52:52]:
Just because people are posting on social media doesn't mean everything's shiny. So it's about giving people that support. Now, going back to your question, what was it? Say it again.
Adam Gray [00:53:05]:
So he said, how can an organization foster a culture of trust that encourages employees to report their own digital mistakes?
Giles O'Halloran [00:53:15]:
We would call this an inclusive but supportive security culture. That's what it is. It's about first exposing people to training so they learn, but making sure they understand that if they make a mistake, these are the things to do, and not to worry about it. Even the head of Microsoft Security said she would rather people put their hand up and say, I've made a mistake, and then work with them to resolve it, because that's part of the solution. If you work together to build a solution, it's even more powerful. She said that's what's important. Because you cannot control every risk; it just doesn't happen.
Giles O'Halloran [00:53:49]:
It's not realistic, and criminals in particular are always that bit ahead, whether of policing, security advisors or whoever. So it's about training people on what the most likely threats are, what to do, how to do it and who to talk to; keeping those conversations open; running roundtables; and fostering a learning environment where people share, learn, look at case studies and get exposed to the problem. How would you solve it? Maybe run an internal hackathon around, okay, what would you do and how would you solve it? There are lots of different learning frameworks that continually evolve our resilience, because that's what it is: the ability to bounce back from these scenarios, so we can build that as a culture. I actually wrote about that on LinkedIn, in an article (not a post) about a month ago, so if you want to have a look after the call, it's out there.
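One way to picture the "put your hand up" culture Giles describes is an intake process whose record, by design, has no blame field. The sketch below is hypothetical; the field names and the triage rule are invented for illustration, not taken from any real tool:

```python
# Hypothetical sketch of a blameless "report your own mistake" flow:
# the record captures what happened and what was learned, never who is at fault.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class MistakeReport:
    what_happened: str                  # plain-language description
    systems_affected: list[str]         # e.g. ["email", "CRM"]
    reported_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    lesson_learned: str = ""            # filled in after the joint review
    # Note: no "culprit" field. The point is to surface incidents, not assign blame.

def triage(report: MistakeReport) -> str:
    """Route the report; every reporter gets a thank-you, never a reprimand."""
    if "credentials" in report.what_happened.lower():
        return "escalate: rotate passwords, then schedule a blameless review"
    return "log it and add to the next roundtable / learning session"

r = MistakeReport("Clicked a link in a phishing email and entered my credentials",
                  systems_affected=["email"])
print(triage(r))
```

The design choice doing the work here is structural: because the schema cannot record blame, reporting early is always safer than hiding the mistake, which is exactly the incentive the question was about.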
Adam Gray [00:54:41]:
Brilliant, Giles. Absolutely brilliant. Thank you so much.
Tim Hughes [00:54:45]:
Thank you, Giles.
Giles O'Halloran [00:54:46]:
Thank you. I hope it's been okay. You put me...
Tim Hughes [00:54:49]:
No, it's been more than okay.
Adam Gray [00:54:50]:
Fantastic.
Giles O'Halloran [00:54:51]:
So.
Adam Gray [00:54:51]:
So where can people get in touch with you?
Giles O'Halloran [00:54:54]:
You can find me on LinkedIn; I'm out there as a friend of both of you, so people can find me there. Or go to my website, Go To Work, which is GoToWork.co.uk, and just reach out and have a conversation. I'm a strong believer that when it comes to networks and sharing, we should never collect; it's always connect. So reach out to me, drop me a line, say why you're interested in connecting, and then we can carry on the conversation.
Adam Gray [00:55:23]:
Absolutely fantastic. So, Giles, brilliant. Thank you very much indeed.
Giles O'Halloran [00:55:27]:
Thanks, guys.
Adam Gray [00:55:28]:
Everyone in the audience, if you would like to be a guest because you have something to say and insights you would like to share, please join us by clicking the QR code here. That's the end of The Digital Download for today. Thank you, Tim. Thank you, Giles, of course.
Adam Gray [00:55:53]:
Thank you both and thank you to everybody in the audience. It's been a fantastic show and we really appreciate it and we hope to see you all again next week. Until then, goodbye.
Giles O'Halloran [00:56:04]:
Thanks, everyone.
#PeopleRisk #Cybersecurity #ArtificialIntelligence #RiskManagement #SocialSelling #DigitalSelling #SocialEnablement #LinkedInLive #Podcast