AI and Security - the opportunities and challenges

Aubrey Lovell (00:09):
Hey friends, and welcome back to Technology Now, a weekly show from Hewlett Packard Enterprise, where we take what's happening in the world and explore how it's changing the way organizations are using technology. We're your hosts, Aubrey Lovell.

Michael Bird (00:21):
And Michael Bird. Now, in this episode, we are taking a fresh look at how AI is affecting the world of cybersecurity. As we've explored on the podcast in previous episodes, artificial intelligence has opened up a whole new world of opportunities for our organizations, but it also brings fresh challenges for cybersecurity professionals. So that's what we are exploring today, the upsides of using AI in cyber risk management and the potential risks that come with it. So we'll be looking at the current state of play. We'll be asking whether AI as a tool to defend us can match AI as a weapon to attack us. And of course, we'll be discussing why this matters for our organizations.

Aubrey Lovell (01:03):
So if you are the kind of person who needs to know why what's going on in the world matters to your organization, this podcast is for you. And if you haven't yet, which we hope you have, subscribe in your podcast app of choice so you don't miss out. All right, Michael, are you excited?

Michael Bird (01:18):
Oh, very much so.

Aubrey Lovell (01:19):
Okay, let's get into it.

Michael Bird (01:24):
Let's do it.

(01:24):
Artificial intelligence is one of the fastest-growing business fields in the world. A recent McKinsey report shows 72% of companies are now using AI in at least one business function, up from just 20% seven years ago.

Aubrey Lovell (01:38):
But with this rapid growth comes a significant risk. Reports by the research firm Statista estimate cybercrime could cost organizations over $15 trillion by 2029. AI is playing an increasing part in that, being used both as a tool to crack passwords and as a tool to fool us. In 2024, Hong Kong police reported that a worker at a financial institution handed over $25 million to scammers as a result of a deepfaked message from a colleague. This creates a conundrum: how do you leverage AI to drive efficiency and innovation without opening the door to security vulnerabilities? And beyond that, how can cybersecurity professionals use artificial intelligence to defend us while also guarding against AI-driven threats?

Michael Bird (02:23):
Well, recently I met with Simon Leach, Director of the Cybersecurity Center of Excellence at HPE to find out more. So Simon, welcome to the show. What is the Cybersecurity Center of Excellence?

Simon Leach (02:37):
The Cybersecurity Center of Excellence is really there to do a number of things. Business engagement, so helping the rest of the organization to understand the requirements to keep HPE secure. We also get involved in customer outreach. A lot of the time, especially with some of the new regulations coming into force, our customers want to understand that they're actually doing business with a trusted party, so I work together with other teams internally to get that message across to our customers. And then my team is also responsible for some of the metrics and reporting that we provide back to our board, and some of the readiness work that we have for the regulations, particularly the new regulations coming into force in the EU over the next year or two.

Michael Bird (03:17):
So, let's talk AI and cybersecurity. So in your experience, how is AI affecting the world of cybersecurity? Is it for better or for worse?

Simon Leach (03:26):
Okay, so to use a very overused term, it's a double-edged sword, right? AI is completely changing the way we do business. It makes business a lot more effective, and of course that applies to the world of cybersecurity as well. If I can do something faster, then I want to do it faster. And AI really helps our cybersecurity professionals to get better insight into the sorts of threats they're experiencing, to be able to look at alerts coming into our security operations centers and provide context around them. But then the other side of that sword is how AI is being adopted by cyber criminals as well. The challenge we have here, and I'll take one example around phishing. So in the past we've always seen a particular type of phishing, and you may have seen this yourself. When you receive a phishing email pretending to be from a large company, there will be spelling mistakes in it, there'll be grammatical errors, and there's actually a reason for that.

(04:20):
And the reason was that the people more likely to spot those grammatical errors are also less likely to click through on the link and give away money at the end of the phishing attack. That always used to be part of cybersecurity awareness training: keep an eye out for spelling errors, for web links that don't work, all those kinds of things. But now, because cyber criminals are able to use AI, they can create very, very realistic-looking phishing emails.

(04:48):
So as an example, I was playing around with some large language models, and I asked one to write me a phishing email in the style of a particular person. Because that person was quoted on the web in numerous places, I was able to provide links to interviews the person had done, and the AI engine was able to use that as part of its context to create an email that sounded very much like that person had actually written it. Cyber criminals are also starting to use AI to create more effective malware. For example, using AI to create polymorphic malware, which basically changes every time it's executed on a system, so signature-based systems have a hard time identifying it. We're also seeing cyber criminals using AI to create deepfakes. So there are a lot of ways that we're seeing these cyber criminals take advantage of us.
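To illustrate that last point with a deliberately harmless toy rather than real malware: if a payload mutates between executions, a static signature such as a known-bad file hash no longer matches it. The byte strings and names below are invented purely for illustration; this is a minimal sketch, not how any real detection product works.

```python
import hashlib

# A harmless toy, not real malware: two "payloads" that behave identically
# but differ in inserted junk bytes, as a polymorphic engine might produce
# on each execution. The bytes are invented for illustration.
payload_v1 = b"\x90\x90" + b"payload_logic" + b"\x00\x00"
payload_v2 = b"\x90\x90\x90" + b"payload_logic" + b"\x00"

# Static, signature-style detection: a known-bad file hash.
known_bad_hash = hashlib.sha256(payload_v1).hexdigest()

def signature_match(sample: bytes) -> bool:
    """Flags a sample only if it is byte-identical to the known-bad one."""
    return hashlib.sha256(sample).hexdigest() == known_bad_hash

print(signature_match(payload_v1))  # True  -- the original variant is caught
print(signature_match(payload_v2))  # False -- the mutated variant slips past
```

This is why defenders lean on behavioral context and anomaly detection rather than static signatures alone when malware rewrites itself on every run.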

Michael Bird (05:37):
Yeah. How much of a grasp do you think most people have on the effects of AI on their cybersecurity and data vulnerabilities? Is there enough awareness of it at the moment?

Simon Leach (05:46):
So I think it depends who you are directing that question to.

Michael Bird (05:49):
Okay.

Simon Leach (05:50):
So on the one hand, you've got your everyday user. And I kind of relate this to the way public cloud was adopted as well. The everyday user thought, okay, public cloud is there, therefore it must be secure, therefore I can use it to run my business applications. And exactly the same thing is happening with AI. All of the large language models out there are readily available for people to use, either for free or for a small monthly charge, and people assume that because it's there, they can use it and there's no real risk involved. But what we really need to do from an organizational perspective is this: if your users are going to be using AI, make them aware of the risks. When we first saw generative AI becoming popular, a lot of organizations came out in public and said, right, we are just going to ban gen AI for all of our users, because there's too much risk involved.

(06:46):
But in the same way that telling a small child not to do something just makes them more interested in doing it, banning AI is not a solution for organizations, because people will find a way to use it anyway. Even if I put security controls in place at the perimeter of my organization, or a nice message in the browser telling people they shouldn't be using AI or should be careful with it, they'll just go away and use a different platform. Instead, we need to start making people aware of those risks through a good governance process, so that as people use AI, they learn to use it responsibly. They understand the risk involved, for example, in putting confidential information into a public engine, and they understand the options they have to do it in a less risky way.

Michael Bird (07:30):
Yeah, okay. So you've been involved in both cybersecurity and digital risk management. When it comes to AI as a threat, do they need to be treated differently?

Simon Leach (07:41):
So the name of our organization internally is Cybersecurity and Digital Risk Management, and there's a reason we've put those two teams together: the two concepts are very closely connected. Cybersecurity is about putting controls in place to reduce the risk of a particular attack or threat. Digital risk management is understanding the risk that a particular activity poses to the organization and defining a strategy to protect against it. And there are different ways of dealing with risk, right? The very simplest way is you accept the risk, but very often that's a career-limiting move.

(08:22):
Secondly, you can mitigate the risk, so you can put a security control in place. You can also avoid the risk by doing something different, or you can transfer the risk, for example by getting cybersecurity insurance in place. But it's important to look at the overall picture and understand all the different aspects of the risky activity. And from an AI perspective, that's not just about the cybersecurity risk, right? It's also about the ethical risk of using AI, and the privacy risk as well. So you need to look at it as a complete picture. Only once you've got that understanding of the risk that a certain activity involving AI introduces can you start to talk about what cybersecurity controls should be put in place to mitigate it, if necessary.
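One way to picture the four treatment options Simon lists is as entries in a simple risk register. The sketch below is purely illustrative; the activities, scoring scheme, and field names are assumptions, not HPE's actual process.

```python
from dataclasses import dataclass
from enum import Enum

class Treatment(Enum):
    ACCEPT = "accept"      # live with the risk (often a career-limiting move)
    MITIGATE = "mitigate"  # put a security control in place
    AVOID = "avoid"        # do something different instead
    TRANSFER = "transfer"  # e.g. cybersecurity insurance

@dataclass
class Risk:
    activity: str    # hypothetical examples below, not real assessments
    likelihood: int  # 1 (rare) to 5 (frequent)
    impact: int      # 1 (minor) to 5 (severe)
    treatment: Treatment

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

register = [
    Risk("staff pasting confidential data into a public LLM", 4, 4, Treatment.MITIGATE),
    Risk("vendor AI feature with unclear data handling", 2, 5, Treatment.AVOID),
    Risk("internal chatbot occasionally misanswering FAQs", 3, 1, Treatment.ACCEPT),
]

# Review the highest-scoring risks first.
for r in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{r.score:>2}  {r.treatment.value:<8}  {r.activity}")
```

The point of the structure is the one Simon makes: the treatment decision comes after the risk is understood as a whole picture, including ethical and privacy dimensions, not before.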

Aubrey Lovell (09:12):
Awesome. Some amazing insights in there from Simon. I can't wait to hear the rest of that conversation.

Michael Bird (09:19):
Right. Well, now it is time for Today I Learned, the part of the show where we take a look at something happening in the world that we think you should know about. Aubrey, I think you've got one this week?

Aubrey Lovell (09:31):
I sure do, Michael. Now, you've probably heard about swarm robots; I think we've talked about them extensively before on the podcast. They're basically tiny drones that can operate in formation, and in fact, they're already being used to create amazing moving light shows at events around the world. But they're fairly limited in their abilities, especially when it comes to flight time. Until now, that is. Drum roll please-

Michael Bird (09:55):
Drrr...

Aubrey Lovell (09:55):
So researchers... That was way better than mine. Researchers in Massachusetts have developed a new type of robotic insect that can fly a hundred times longer than previous models. These new 'bug bots' boast improved battery capacity, lighter materials and enhanced wing flexibility, allowing them to fly longer and more efficiently. The tiny drones are designed to mimic real insects and could one day swarm out of mechanical hives to pollinate crops at a rapid pace, increasing yields and leading to more food being produced. Alternatively, they could provide wide search-and-rescue coverage in emergencies. Despite the significant progress, challenges remain, such as replicating the sophisticated control of real insect wings. Still, these advancements bring us closer to scaling up the technology, taking swarm robotics beyond the realm of research and novelty in the near future.

Michael Bird (10:49):
Yeah, thank you for that, Aubrey. That was really, really interesting.

(10:55):
Okay, well now it is time to get back to my conversation with Simon Leach, Director of the Cybersecurity Center of Excellence at HPE.

(11:04):
So gazing into your crystal ball, what do you see as the major emerging challenges and pressure points for organizations going into 2025 and beyond?

Simon Leach (11:14):
The cybersecurity threats and challenges that we experience don't change that regularly. So we still have big concerns around the number of vulnerabilities out there, and we still have big concerns about the human factor in cybersecurity and training our users well. But I guess there have been a couple of changes over the last couple of years which we're going to see playing much more of a role as we go into 2025. That's obviously the increased adoption of AI, moving beyond just generative AI to look at how organizations start to adopt AI within products they already use.

(11:48):
So for example, a particular cybersecurity vendor introduces an AI capability into their product. What risk does that introduce to us? So that's one area. And then the other big area that I'm having a lot of discussions about at the moment is regulations. At the moment we have this tsunami of regulation, and it's not just us that has that problem. So it becomes a real challenge as a cybersecurity leader to understand how you focus on adopting regulation and how you adapt your own cybersecurity framework so that you can deal with the challenges it's going to introduce.

Michael Bird (12:23):
Yeah. So I guess off the back of that, is there anything that's really exciting at the moment in the world of cybersecurity and risk management, particularly when it comes to AI?

Simon Leach (12:32):
I think what excites me most is the other ways that we can use AI. And I just want to preface that. A lot of people talk about how AI is going to replace jobs, and whilst that's certainly theoretically possible, the way I like to look at it is that your job won't be replaced by AI, but it will be replaced by somebody who knows how to use AI, right?

Michael Bird (12:59):
Yeah.

Simon Leach (12:59):
And from a cybersecurity perspective, that's really important, because we should all think about how we can embrace the capabilities that AI brings us. I mentioned using AI in the Security Operations Center to enhance incident responders' capabilities by providing them more context and by helping them write reports much more effectively. And we've also got other capabilities. Why wouldn't we use AI, for example, to help us write documentation? As long as we're wary of the potential for data loss in doing that, and we provide the right context to the LLM and ask the right questions, then we're not really introducing a risk to the organization.

(13:37):
I think the other area that's quite exciting, and it's not specifically a cybersecurity one, is the popularization of AI as an on-premises capability. Obviously at HPE we have a lot of infrastructure that's great for doing AI, but sometimes that's a little bit out of reach if we look at the very large platforms that we offer. There are much easier ways to make an entrance into that space, by taking some of the open source LLMs that are out there and adapting them for your purposes internally. You don't need to spend hundreds of millions of dollars on training the AI model then; you've already got a trained model. And by using things like RAG, retrieval-augmented generation, to provide additional context, we can make the responses to our questions company-specific.
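As a rough sketch of the RAG pattern Simon describes: retrieve the internal documents most relevant to a question, then prepend them to the prompt so an already-trained model can answer with company-specific context. The documents and the toy word-overlap retriever below are assumptions for illustration; production systems typically use vector embeddings and a vector store.

```python
import math
import re
from collections import Counter

# Hypothetical internal documents; a real deployment would index far more.
documents = [
    "Expense claims over 500 USD require director approval.",
    "Production database backups run nightly at 02:00 UTC.",
    "All vendor contracts are reviewed by the legal team.",
]

def tokenize(text: str) -> Counter:
    """Bag-of-words token counts -- a toy stand-in for vector embeddings."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def similarity(a: str, b: str) -> float:
    """Cosine similarity between two bags of words."""
    wa, wb = tokenize(a), tokenize(b)
    dot = sum(wa[t] * wb[t] for t in wa)
    na = math.sqrt(sum(v * v for v in wa.values()))
    nb = math.sqrt(sum(v * v for v in wb.values()))
    return dot / (na * nb) if na and nb else 0.0

def build_prompt(question: str, k: int = 2) -> str:
    """Retrieve the k most relevant documents and prepend them as context."""
    top = sorted(documents, key=lambda d: similarity(question, d), reverse=True)[:k]
    context = "\n".join(f"- {d}" for d in top)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# The resulting prompt would then be sent to a locally hosted open-source LLM,
# keeping company data on-premises rather than in a public engine.
print(build_prompt("When do database backups run?"))
```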

Michael Bird (14:24):
All right, final question, and one we ask all of our guests: why should organizations care about AI, both the cybersecurity side and that sort of risk management piece? Why should organizations care?

Simon Leach (14:37):
There are three aspects to AI that I tend to discuss when I'm speaking to customers about this, right? The first one is, how do we use AI securely? The second one is, how do we build AI securely? And the third one is, how do we use AI as a cybersecurity tool to protect our organization? In the same way that when we started getting interested in supply chain security and software security a few years ago we spoke about things like a secure software development lifecycle, exactly the same thing applies to model development. And just as we talk about general governance, the same applies to AI governance. So it's important, whether we're talking about consuming AI or building AI, that the cybersecurity team is involved right from day one of the project. That's the main reason to be interested, right? It's understanding the business use cases, understanding the risk they introduce, and making sure you're ahead of the curve when it comes to designing the right controls or the right approach to adopting it securely.

Michael Bird (15:39):
Amazing. Simon, thank you so much for joining us on this episode of Technology Now. It's been really, really great to chat.

Aubrey Lovell (15:46):
All right. Thanks so much for that, Michael. And thanks to Simon for joining us. You can find more on the topics discussed in today's episode in the show notes.

(15:53):
Well, we're getting towards the end of the show, which means it's time for This Week in History, a look at monumental events in the world of business and technology that have changed our lives. Michael, what is life-changing today?

Michael Bird (16:09):
Well, the clue last week was, it's 1953, and this twist revolutionized science. Now, I think last week we both thought maybe... Or you thought I should say, something about DNA.

Aubrey Lovell (16:21):
Mm-hmm.

Michael Bird (16:22):
And well, I'm pleased to tell you it is the anniversary of James Watson and Francis Crick submitting their groundbreaking article on the structure of DNA to the journal Nature. They presented DNA as consisting of two helical chains coiled around the same axis. The key insight was that the chains are held together by pairs of so-called bases, which join in such a way as to ensure that the two chains bond and work together. This pairing mechanism was vital to the stability and function of DNA, marking a significant milestone in the field of genetics. Their research, alongside work from Dr. Maurice Wilkins, won the 1962 Nobel Prize in Physiology or Medicine. The contribution of Dr. Rosalind Franklin, overlooked at the time, has since come to be recognized as well.

Aubrey Lovell (17:15):
It kind of gives a new context to the saying that's a really nice pair of jeans. Amazing story, Michael.

Michael Bird (17:22):
That was quality. Quality. I'll add it to my repertoire.

Aubrey Lovell (17:25):
And the clue for next week, it's 1930, and this celestial discovery caused a small stir. Huh? Any idea what that might be?

Michael Bird (17:34):
Small stir. Small stir.

Aubrey Lovell (17:35):
Small stir.

Michael Bird (17:36):
Maybe... Halley's Comet? No, that's the wrong era, isn't it? A small stir? No, I don't know. Okay, I guess we'll find out next week. Any idea from you?

Aubrey Lovell (17:46):
I don't know. I think I'm just going to have to ride the wave on this one. See what happens next week.

Michael Bird (17:50):
Okay, you ride the wave. You ride the wave.

Aubrey Lovell (17:53):
All right, well that brings us to the end of Technology Now for this week. Thank you to our guest, Simon Leach, Director of the Cybersecurity Center of Excellence at HPE. And to you, our listeners, as always, thank you so much for joining us.

Michael Bird (18:04):
Technology Now is hosted by Aubrey Lovell and myself, Michael Bird. And this episode was produced by Sam Datta-Paulin and Lincoln Funda-Vesthason with production support from Harry Morton, Zoe Anderson, Alicia Kempson-Taylor, Alison Paisley, and Alyssa Mitri.

Aubrey Lovell (18:19):
Our social editorial team is Rebecca Wissinger, Judy Ann Goldman, Katie Guarino. Our social media designers are Alejandra Garcia and Embar Maldonado.

Michael Bird (18:27):
Technology Now is a Lower Street production for Hewlett Packard Enterprise. And we'll see you at the same time, the same place next week. Cheers.

Hewlett Packard Enterprise