MICHAEL BIRD
Hello and welcome back to Technology Now, a weekly show from Hewlett Packard Enterprise where we take what's happening in the world and explore how it's changing the way organizations are using technology.

We’re your hosts Michael Bird…

AUBREY LOVELL
and Aubrey Lovell, and this week we are digging into what could be the next step in artificial intelligence evolution: agentic AI.

- We’ll be looking at the current state of affairs in AI

- We’ll be exploring the issues with current AI models and how so called “agents” can help solve them

- And we’ll be asking what this means for each and every one of us

MICHAEL BIRD
Yes, so if you’re the kind of person who needs to know *why* what’s going on in the world matters to your organisation, this podcast is for you.

(oh) And if you haven’t yet, subscribe in your podcast app of choice so you don’t miss out.

Right, let’s get into it!

AUBREY LOVELL
We live in a world full of artificial intelligence. We’re so accepting of its role in modern society that many of us have invited it into our homes in the form of smart tech and voice assistants. Remember how worried people were when voice assistants first came out and how that worry has pretty much completely vanished since we discovered how convenient they are?

MICHAEL BIRD
Yeah, yeah, very much so. I speak to my voice assistant of choice on a daily basis because I use it to switch my lights on. Do you say please and thank you when you ask your voice assistant to do things?

AUBREY LOVELL
I definitely do, because you never know. You want them to like you, right?

MICHAEL BIRD
Yeah. You never know. I do the exact same thing. I always say it because I've got children in my house, and I sort of want to make sure I'm teaching them good manners. So that's why I do it as well. “Please, please can you turn the lights on?”

AUBREY LOVELL
But here’s the thing… We’ve all noticed the issues with AI devices – sometimes you ask them a question and the answer seems to be completely unrelated to the original request. For something with “intelligence” in its name, it can be pretty stupid.

MICHAEL BIRD
And this is part of the current issues with AI. While it may appear intelligent to us, behind the scenes, the programme is just following a specific set of instructions – complicated instructions, yes, but formulaic nonetheless.

This is where the concept of agents comes in…

AUBREY LOVELL
Secret agents????

MICHAEL BIRD
I wish.

These agents lend their name to a form of AI called “Agentic AI” which, in theory, should be more independent than the large language models that it’s built on.

AUBREY LOVELL
Well, it certainly sounds like an interesting next step in the AI revolution, and it’s incredibly important that we understand exactly how these models we have come to rely on for everyday life actually work.

MICHAEL BIRD
Yes, exactly.

And here to tell us more about this agentic AI is Jimmy Whitaker, Chief Scientist in the AI Private Cloud Group at HPE.

INTERVIEW PT 1

MICHAEL BIRD
So before we get into the meat of the subject, can you just very quickly explain what an LLM is?

JIMMY WHITAKER
An LLM is a large language model that's been trained to understand, interpret, and generate human language. It kind of stems from language models that were really popular from the 1950s through the 1980s, when people were originally trying to apply just probabilistic methods to human language. So given this handful of words, what's the most likely next word? This is how things like spell check were invented, and other technologies like that.

So large language models kind of came in in the early 2000s. People began using neural networks, which are a specific type of machine learning algorithm, to train language models, and then making them larger and larger and larger. Then people started using deep learning, which is just really, really big neural networks. But essentially the concepts stayed the same.
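The “given this handful of words, what’s the most likely next word?” idea Jimmy describes can be sketched in a few lines of Python. This is a toy counting model, purely illustrative – nothing like a production LLM:

```python
from collections import Counter, defaultdict

# A tiny corpus; for each word, count which word follows it.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # → cat ("cat" follows "the" twice, the others once)
```

Scaling this same objective up to internet-sized data, and swapping the counts for a neural network, is – very roughly – what turns this into a large language model.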

MICHAEL BIRD
And so how does a large language model differ from what most people would think of or call an AI?

JIMMY WHITAKER
AI in general is kind of a broad term that encapsulates just anything that can simulate human behavior. I mean, it's just artificial intelligence. So I would say NLP, natural language processing, is a branch of AI that deals with human language, and LLMs are just a really powerful tool in that category. But there are other areas of AI – for instance, computer vision or speech recognition, those types of things – that all kind of fall into the category of AI.

AI is kind of just somewhat become synonymous with deep learning or like deep neural networks at this point, since that's the most common method for just implementing some of these large models

MICHAEL BIRD
So how exactly does an LLM work?

JIMMY WHITAKER
Yeah, so there's a lot of math behind LLMs and how they work and everything. But the main concept is pretty simple. When we start, these models know absolutely nothing about the world. When we train them, we just give them a sample of text. So for instance, it can be 100 words, 200 words, a thousand words. But when we do that, we essentially mask a handful of those words in the process and then say, predict what words should go into these slots.

So it's kind of like the analogy questions on standardized tests – cat is to meow as dog is to question mark. And when you have data the size of the internet, you can do this a whole bunch. After lots and lots of training and lots of computation, the model starts to essentially learn, because you can say, oh, that was wrong, this is the right answer. Initially the model's really bad at this, but given enough data and time and computation, it starts to learn the structure of the underlying data. And in the case of large language models in particular, if you're training on the internet, then there's a lot of data and there's a lot of structure it can potentially learn from.
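The mask-and-predict training step Jimmy outlines can be sketched in Python. Everything here is a hypothetical stub – the “model” is just a placeholder function, and a real trainer would turn the score into a loss and update weights:

```python
import random

def mask_sample(words, n_masks, rng):
    """Replace n_masks random positions with '[MASK]'; return masked text and the answers."""
    positions = rng.sample(range(len(words)), n_masks)
    answers = {i: words[i] for i in positions}
    masked = ["[MASK]" if i in answers else w for i, w in enumerate(words)]
    return masked, answers

def train_step(model_predict, words, n_masks, rng):
    """One training step: mask some words, ask the model to fill each slot, score it."""
    masked, answers = mask_sample(words, n_masks, rng)
    correct = sum(model_predict(masked, i) == ans for i, ans in answers.items())
    return correct, len(answers)  # a real trainer would turn this into a loss and backpropagate

text = "the quick brown fox jumps over the lazy dog".split()
always_the = lambda masked, i: "the"  # a deliberately bad stub "model"
correct, total = train_step(always_the, text, 2, random.Random(0))
print(f"{correct}/{total} slots filled correctly")
```

Repeating this over billions of samples, with the stub replaced by a neural network whose weights improve after every wrong answer, is the “lots and lots of training and lots of computation” in the interview.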

MICHAEL BIRD
So what are the biggest issues that we are facing around LLMs at the moment?

JIMMY WHITAKER
I think some of the biggest ones are providing the right information to the LLM to work with.

MICHAEL BIRD
So it's intelligent, but you're sort of beholden to the data that you put in.

JIMMY WHITAKER
Yeah, the model is 100% learning a representation of the data, so if you're putting a lot of bad stuff in, then the bad stuff is going to come out.

Basically, one of the biggest challenges is keeping those LLMs relevant when you're trying to do something with them. The model doesn't have a concept of yesterday, because it was trained on data up to, you know, sometime in 2023 or 2024 or something like that. It may still generate or hallucinate some outputs and say, this is what happened in the world yesterday, just because it's trying to answer my question.

I think hallucinations, to put it shortly, are probably the main issue that LLMs are facing right now, and usually the way that people are trying to approach that is by adding more context to the question. And that's obviously where agents come in, among a variety of other things.

MICHAEL BIRD
Alright, let's talk agents then. So, because we've heard that phrase a few times on the show, I don't think we've ever really talked about it in any great detail. What exactly is an agent when it comes to AI?

JIMMY WHITAKER
Basically, using an LLM is all about improving the context – giving it really, really relevant information and specific questions and things that the LLM can then start generating from. The better the prompt, the better the result, or the more it's gonna align with what you asked it. So for instance, you could do a search, grab some text, and then put it into the LLM, and it can generate an output. Agents really start to come in where you're trying to get the LLM to maybe automate some of these steps. And the way that you do that is you can essentially think of it as looping with an LLM in the process. You're just calling an LLM multiple times. And the first thing you may ask the LLM is: plan your next steps.

This is my goal, and you have things like these tools available. Basically, all an agent really is, is taking an LLM and essentially using it as the brain for planning the next steps, reasoning about the output of previous steps, and still trying to get you to a resolution. Basically, it's trying to force LLMs to act more like humans, if you will. So: plan the next steps, here are some tools that you can potentially call. And then once those tool calls come back, feed it all back into the LLM and loop over it again. Kind of iterating, and allowing the LLM to predict or generate some next steps. They may be wrong, but it can generate some next steps and call some tools, along with just generating a token-by-token or word-by-word response.

MICHAEL BIRD
So it sort of sounds like, rather than giving one very broad instruction, it's chopping that into individual tasks. It'll still answer that original question, but it breaks it into different tasks to hopefully get a better answer.

JIMMY WHITAKER
Yes, that's exactly it.

So it may not solve it all in one fell swoop. It can run some tools, wait for the output of those tools, then hit the LLM again with all the previous context and the output. You kind of go through this process or this loop again and again to achieve a result.
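The plan–call–loop pattern described above can be sketched in Python. Everything here is hypothetical – the “LLM” is a stub function and the tool names are made up – but the control flow is the part that matters, including the step cap that stops a confused agent from looping forever:

```python
def run_agent(llm, tools, goal, max_steps=5):
    """Loop: ask the 'LLM' to plan, run any tool it requests, feed results back."""
    context = [f"goal: {goal}"]
    for _ in range(max_steps):
        action = llm(context)                 # plan the next step from all context so far
        if action["type"] == "answer":
            return action["text"]             # the LLM decided it can answer now
        result = tools[action["tool"]](action["arg"])   # run the requested tool
        context.append(f"{action['tool']}({action['arg']}) -> {result}")
    return "gave up: step limit reached"      # guard against endless ping-ponging

# Stub "LLM": search first, then answer from the tool output.
def stub_llm(context):
    if len(context) == 1:
        return {"type": "tool", "tool": "search", "arg": "flights to Scotland"}
    return {"type": "answer", "text": "Cheapest flight: " + context[-1].split("-> ")[1]}

tools = {"search": lambda query: "EDI £79"}   # a made-up tool returning a canned result
print(run_agent(stub_llm, tools, "book a flight to Scotland"))
# prints "Cheapest flight: EDI £79"
```

A real agent framework swaps the stub for actual LLM calls and real tool integrations, but the shape – plan, act, observe, repeat – is the same loop.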

MICHAEL BIRD
I guess it's like trying to make AI a bit more human in its decision making.

JIMMY WHITAKER
Yes, it's definitely, I mean, it's kind of a forced pattern, I guess, if you will. The agent isn't necessarily learning or autonomous, at least not yet, in how it's making these decisions. Usually it's somewhat of a combination of: here are the tools available to you, and here's a prompt that outlines how exactly you should plan steps and how you should do these things, generally speaking. So it really is humans trying to force LLMs to act more like humans by manipulating some prompts and doing some fancy things. And that's definitely the driving force behind agents right now.

MICHAEL BIRD
Does it make LLMs a little bit more trustworthy? Because to some extent, you're sort of putting it on rails in the places where it needs to be on rails, but then giving it free rein in the places where you want it to have free rein.

JIMMY WHITAKER
I think it's definitely making agents or making LLMs more capable. As far as trustworthy, I think you still have the potential for some of the just the fundamental problems of hallucinations and other types of things.

I've seen this before when I've used agents, where it'll try one solution and that doesn't work. It'll go back to the original solution it tried. Okay, that didn't work. It'll just kind of ping-pong back and forth between trying the same things. If you don't put some timeout in, it'll try to do that indefinitely, if you will. So there's definitely some edge cases.

Agents are definitely making LLMs and the use of them way, way more powerful. But trustworthiness is, I think, an interesting potential concern.

AUBREY LOVELL
Thanks so much Jimmy.

I think it’s quite fascinating because these agents sound like they’re making the AI more … well… intelligent. I think a lot of us imagine these chatbots and LLMs to be constantly learning so the fact that this is a separate step in the process is really quite interesting.

Looking forward to hearing more about them later in the show.

AUBREY LOVELL
Alright then. Now it’s time for “Today I learned”, the part of the show where we take a look at something happening in the world that we think you should know about. Michael, what have you got for us this week…

MICHAEL BIRD
Okay, so I am married. I wear a wedding ring.

I think mine might be platinum. I think it's platinum. It's shiny silvery color. Do you have a wedding ring?

AUBREY LOVELL
I do. I am also married. I have a rose gold that is mine with a Canadian source diamond.

MICHAEL BIRD
Well Aubrey, yours is a rose gold ring so let's just talk about gold because it's been known for a while that gold is formed in some of the largest explosions in the universe, neutron star collisions, but researchers in New York have decoded a two decade old signal and discovered a new way in which gold is formed in space.

So back in 2004, astronomers recorded two enormous bursts of energy from a type of star known as a magnetar. Sweet name. This is a neutron star, the remnant left behind after a quote unquote small supernova with an incredibly powerful magnetic field. The first flare lasted only a few seconds and released more energy than our sun does in one million years and was decoded pretty quickly.

However, the second smaller flare 10 minutes later has baffled scientists for two decades, but no longer. It turns out this second signal marked the creation of heavy elements which do, of course, include gold. As a quick note, if you are feeling left out, as I do, and have a platinum ring, don't worry because these explosions also create platinum, not just gold.

The researchers predict that this flare marked the creation of up to a third of the mass of the Earth in heavy metals, and that this sort of flare-up could be the source of 10% of all the heavy elements in the galaxy. Wow.

So I think what this is saying is those lovely little rings that you might be wearing, gold or platinum, they might come from a supernova?

AUBREY LOVELL
That sounds incredible and kind of awesome. I mean, I was getting worried for you a little bit, Michael, when we were talking about gold and not platinum, but you're definitely part of the Magnetar Club now, so we're cool.

MICHAEL BIRD
Yeah, I should get a badge

AUBREY LOVELL
All right then, well, let's bring ourselves back down to earth and return to Michael's interview with Jimmy Whitaker as we delve further into the world of agentic AI.

MICHAEL BIRD
Yeah, okay. Can you give some sort of real world examples of where you've seen agents being used?

JIMMY WHITAKER
Yeah, absolutely. So one of the areas that I think that we're seeing the most commonly applied is in software engineering.

But agents can be really varied. A second example that might be a little bit more tangible: if I wanted to book a flight, an agent is kind of required because there are multiple steps involved. I may give it a prompt that says, hey, find me a flight to Scotland for these dates. And there are multiple steps an agent would need to do. It could maybe generate a list of cities that I can fly into in Scotland, but then it's going to need to look up flights in specific date ranges. It may need to use a calculator to figure out which flight is the least expensive during that duration.

It needs to reserve things. So there's kind of a variety of components that can come in here to plan some steps, call some tools to inform itself on what it needs to do in the next steps and so on and so forth. And ultimately get to a result of, right, I have a trip to Scotland and I'm excited about that

MICHAEL BIRD
So an LLM in and of itself wouldn't be able to do that because there's multiple steps involved, right? Because an LLM is just trying to predict the next word.

JIMMY WHITAKER
Yeah, exactly

MICHAEL BIRD
Can an LLM create an agent? Is there a prompt you can use to create an agent?

JIMMY WHITAKER
I think it could – it essentially is generating what the prompt is for the next iteration. But overall, there's still a controller that needs to run those tool calls.

MICHAEL BIRD
And it sounds like with some of these processes it's many agents taking on different processes.

JIMMY WHITAKER
I don't think it's there yet. I think it's getting there for sure. For instance, we're seeing a protocol that came out called the Model Context Protocol, or MCP, and that's been a way to kind of unify how these tools are incorporated into LLMs for building agents. And then we're also seeing another new protocol called A2A, the agent-to-agent protocol. So we can definitely see the world trending in a direction where agents are absolutely where things are going, and there's an expectation that these agents will be interacting with each other. So there may be a flight booking agent and a flight searching agent, and getting those two to interact would be, I guess, the next step in some of those scenarios.

MICHAEL BIRD
So what do you see as the future of agentic AI?

JIMMY WHITAKER
It's a little bit hard to tell. I mean, I definitely think we're gonna have many, many agents. Agents are gonna be talking to each other.

Like, there's gonna be speech-to-text models that will translate or basically communicate what you're trying to say in text to these agents, which then reason about it and hand it back, and even text-to-speech models to speak it back to you.

So I definitely think that that's a realistic thing – kind of the home assistants on steroids a little bit. I think about the amount of data and how we make these models better. It does seem like some of these models are plateauing a little bit with just raw learning from data. It seems like we're becoming more data constrained rather than compute constrained right now, which is potentially where things are going.

It's definitely going to replace some jobs. I think it would be crazy to predict the opposite that it won't replace any, but it definitely will replace some. But ultimately, I think it's just going to require a new skill set, like industrial revolution required knowledge of processes and management and those types of things. I think this is going to be another thing.

MICHAEL BIRD
Final question then, why should our listeners care about agentic AI?

JIMMY WHITAKER
I mean, the main reason I think they should care is it's definitely going to be the future. It's kind of like computers and technology, and how they just became essential to education and everything like that. If you don't know how to use a computer, you're a little bit obsolete in the modern world for quite a large number of available jobs, or even just basic functionality. I think smartphones have come a long way to make a lot of those things easier.

I think that's where it's going. I think in most industries there's some area where agentic AI may apply. I see it more as a way to improve what already exists rather than, you know, full-on replacing, or "I'll add AI to my problem and it will solve it." I don't think that's really a thing that actually ever happens. So yeah, I think people should care, because there's a lot of capability, and applying that capability is going to be key for just where the world's going.

AUBREY LOVELL
Thanks so much Jimmy.

I feel like there’s a comment to be made here about agents in AI and AI being given more agency, but in all seriousness, it’s crucial that we understand how technology is changing so that we can all adapt and move with the times. If agentic AI is going to become as common as it sounds like it might, we need to be prepared to learn how to use it and adapt to the new working environment we end up in.

MICHAEL BIRD
Right, well Aubrey, we are getting towards the end of the show, which means it is time for This Week in History. Remind me of last week’s clue again?

AUBREY LOVELL
Yes, so, it’s 1980, and this microscopic killer is gone for good. And I think you had a few feelings about this one right?

MICHAEL BIRD
I think I thought it had something to do with vaccination and then I pretty much threw everything at the wall to see what stuck…

AUBREY LOVELL
You did do that – and you were half right!

I'm excited to tell you this is about vaccination and eradication. However, it's not polio, which you mentioned. This is actually about the announcement of the eradication of smallpox in 1980, around two and a half years after the last case of it was detected.

MICHAEL BIRD
Yes, now I remember this. Tell me the story, tell me the story.

AUBREY LOVELL
Okay, so just doing a little stretch here, putting my glasses on. This success followed 14 years of work after a vote at the world health assembly in 1966 provided a special budget dedicated to eradicating smallpox from the world. How they went about this is pretty fascinating because it wasn’t just a huge “vaccinate everyone” situation.

So here in the West, we have mass vaccination programmes but this simply isn’t possible in areas without proper health infrastructure. Where mass vaccination wasn’t possible, an approach called “surveillance and containment” was used.

Now, surveillance included house-to-house searches, with rewards given to anyone reporting a smallpox case. This was followed by containment – isolating cases and their contacts – and a technique known as ring vaccination, where only close contacts of a suspected case are vaccinated. This technique is still regularly used, and a 2024 paper found that ring vaccination also significantly reduced the spread of Ebola.

MICHAEL BIRD
Wow, gosh that's fascinating.

AUBREY LOVELL
Now I do need to give you a little credit for your guess last week as you mentioned polio. There are three strains of poliovirus which are very originally named Type 1, Type 2 and Type 3. Poliovirus types 2 and 3 have been eradicated – in 1999 and 2019 respectively – but type 1 is still endemic in two countries so we aren’t there just yet.

MICHAEL BIRD
Yeah, I think this is one of the biggest achievements of humanity, eradicating viruses such as polio and smallpox. Like, what an undertaking. Going house to house and basically giving people rewards for pointing out cases. It's a really, really clever way of doing it, and it's just amazing. It's such a cool success and it should definitely be celebrated.

AUBREY LOVELL
Definitely, and it should be remembered as well. I think it's important that we continue that education and make people understand the importance of that as we move into the future.

MICHAEL BIRD
Yeah, absolutely.

AUBREY LOVELL
All right, well, Michael, with that, I will turn it over to you. What's our clue for next week?

MICHAEL BIRD
Ok so, your clue for next week is...

It’s 1990, and this first light has been seen without our pesky atmosphere getting in the way…

AUBREY LOVELL
Ooh. I don't know what that could be. 1990.

MICHAEL BIRD
Okay, I mean, I think something to do with space. It has to be to do with space. Oooh, oh, oh, oh, it must be a space telescope, it must be. I'm gonna put my neck on the line and I'm gonna say the Hubble telescope.

AUBREY LOVELL
That’s a good guess. We will find out soon enough!

AUBREY LOVELL
Okay that brings us to the end of Technology Now for this week.

Thank you to Jimmy,

And of course, to our listeners.

Thank you so much for joining us.

If you’ve enjoyed this episode, please do let us know – rate and review us wherever you listen to episodes and if you want to get in contact with us, we now have an email address: technologynow@hpe.com, so give us a shout!

MICHAEL BIRD
Please do.

Technology Now is hosted by Aubrey Lovell and myself, Michael Bird
This episode was produced by Harry Lampert and Izzie Clarke with production support from Alysha Kempson-Taylor, Beckie Bird and Alissa Mitry.

AUBREY LOVELL
Our social editorial team is Rebecca Wissinger, Judy-Anne Goldman and Jacqueline Green and our social media designers are Alejandra Garcia, and Ambar Maldonado.

MICHAEL BIRD
Technology Now is a Fresh Air Production for Hewlett Packard Enterprise.

(and) we’ll see you next week. Cheers!

Hewlett Packard Enterprise