Taking an AI strategy from dream to reality

Michael Bird (00:10):
Hello, hello, hello, and welcome back to Technology Now, a weekly show from Hewlett Packard Enterprise, where we take what's happening in the world and explore how it's changing the way organizations are using technology. We are your hosts, Michael Bird.

Aubrey Lovell (00:23):
And Aubrey Lovell. And in this episode we are looking at something rather practical: how to take an AI strategy from a dream to reality. We'll be asking a leading expert in AI how his team goes about implementing AI solutions for customers, from drawing board to design and on to release. We'll be asking what the thought process looks like, we'll be asking how to avoid common pitfalls when it comes to implementing AI, and we'll be asking what the rest of our organizations can learn from it.

Michael Bird (00:50):
So if you are the kind of person who needs to know why what's going on in the world matters to your organization, then this podcast is for you. Oh, and if you haven't yet done so, make sure you subscribe on your podcast app of choice so you don't miss out. Right, Aubrey, let's get into it.

(01:08):
So building AI is a tricky business. In mid-December 2023, 85% of businesses in the UK reported they were not currently using artificial intelligence, with 83% having no short-term plans to do so. Those figures are from the UK Office for National Statistics, which we've linked to in the show notes.

Aubrey Lovell (01:31):
At the highest level, most organizations understand the potential value of AI, but that doesn't necessarily filter down through the workforce. In fact, according to additional research from Grammarly, which we've linked to in the show notes, 89% of business leaders are at least familiar with GenAI, but only 68% of workers are. Business AI literacy is clearly a hot topic and one we've obviously dipped into on the podcast before. And what that means is that when organizations do decide they want an AI solution, they need help and guidance designing it and getting the most out of it. So what does an effective AI building strategy look like, and what questions should we all be asking, whether our organizations are AI users or not?

Michael Bird (02:13):
Well, that's where today's guest comes in. I recently had the chance to chat with Jimmy Whitaker, Chief Scientist of AI and Strategy at Hewlett Packard Enterprise. And if anyone knows how to answer these questions, it's him. So Jimmy, welcome to the show.

Jimmy Whitaker (02:28):
Thank you.

Michael Bird (02:28):
What exactly does a chief scientist of AI and strategy actually do?

Jimmy Whitaker (02:32):
Yes, it's an ominous title, and to be honest, it is a little bit chaotic depending on the day. So a lot of what I end up doing is working either with the customers or with the software teams and product managers. It involves a lot of different roles, but basically how do we build the right thing that's going to solve a lot of use cases on the software side that takes advantage of the hardware in the right ways and ultimately just provides value to the customer. So it's definitely on the jack of all trades side of things, touches a lot of different aspects, and I sometimes just refer to myself as a janitor. It just depends on what's going wrong and then I lean a little bit more into one area or another.

Michael Bird (03:09):
So what sort of things are you talking to customers about at the moment?

Jimmy Whitaker (03:12):
So generative AI, all the hype, it's definitely not a new thing. I mean, ever since ChatGPT came out, it's been the talk of the town all over the place. So definitely a lot of generative AI. People are asking how they can apply AI to their use cases, which is a very general statement a lot of the time. But at least a lot of the areas that we focus on right now are: how do we apply a customer's data? How do we integrate that into an application, whether it's a RAG application, retrieval-augmented generation, or something for creating an internal chatbot, connecting to their data sources and making it easier to search those data sources by using generative AI? Code generation is another one: how do they create their own code assistants and those types of things? So it's a little bit all over the place, but those are some of the main areas that we're looking at.

Michael Bird (04:00):
You've already casually dropped a three-letter acronym I've not heard before, RAG. What does that... Explain to me quickly.

Jimmy Whitaker (04:05):
So RAG, it's retrieval augmented generation. It's a really fancy way of saying, basically I have my large language model. The large language model has been trained on a dataset and updating that, like training on newer data, newer data, newer data, that would be kind of too laborious to do. So instead what you do is you take your data, you essentially split it up. There's a variety of ways of doing that, but you split your data up and essentially make it searchable or retrievable.

(04:30):
For example, if I have an HR chatbot and I have my own internal HR policies, I have a bunch of documents. I can take those documents, split them up, put them into what's called a vector database, and then when I have a query about say, what's my PTO policy or something like that, the large language model then searches the vector database to find what pieces of data may answer that question. And then it takes all of that as context and then answers the question given all of the context it's received from this. So basically the whole idea is I'm going to retrieve something, I'm going to augment it in some way with an LLM, and then I'm going to generate a response based on your own company data.
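To make Jimmy's HR example concrete, here's a minimal sketch of that retrieve-augment-generate loop. A toy word-count similarity stands in for a real embedding model and vector database, and the final prompt is returned where a real system would send it on to an LLM; all of the names and sample chunks here are illustrative, not anything from HPE's stack:

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# A word-count vector stands in for an embedding model, and a plain list
# stands in for a vector database; both are toys for illustration.
import math
from collections import Counter

# 1. Internal documents, already split into retrievable chunks.
chunks = [
    "Full-time employees accrue 25 days of PTO per year.",
    "PTO requests must be approved by your line manager.",
    "Unused PTO may roll over, up to five days, into the next year.",
]

def embed(text: str) -> Counter:
    """Stand-in for an embedding model: a simple word-count vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """2. Retrieve: find the k chunks most similar to the query."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """3. Augment: wrap the retrieved context and the question in a prompt.
    A real system would now send this to an LLM to generate the answer."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("What is my PTO policy?"))
```

The key point of the pattern is the last step: the model answers from the retrieved company data it's handed, rather than from whatever was in its training set.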

Michael Bird (05:07):
Yeah, yeah. No, that makes sense. That makes sense. So how would you go about designing an effective AI or natural language processing system? Can you sort of break it into some nice steps and talk us through those steps one by one?

Jimmy Whitaker (05:19):
Sure, sure. So the area where I come from centers a lot around data pipelines. It's all kind of data in, data out at the end of the day. But how you're, I guess manipulating that data, how you're using it, where it's coming in and what techniques you're applying to it is the challenging part. And so typically an overall approach to, okay, I have a, say a generative AI application that I want to build. Let's just use the HR chatbot for example. Then essentially it's, okay, what data do I have available internally that's going to be useful for solving this problem? And then we want to say, okay, these are the types of data that I have. Here are the policies that we have, all these other things. Now how do we want to structure these? How do we chop them up? How do we need to pre-process them to make them most useful for say, retrieval and the RAG use case that we were discussing?

(06:04):
And so then we start building out some of the other components. We need to be able to deploy a large language model that can interact with these things. So it kind of steps from what data is available for our use case? What do we need to do to the data to make it usable with some of the applications that we already have? And then ultimately, how do we make that whole process easier? Can we make this a low-code, no-code experience, or is there going to be some development effort and some services that we need to apply for every single use case?
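As a concrete example of the "how do we chop them up" step Jimmy mentions, here's a minimal chunking sketch. The chunk size, overlap, and sample text are illustrative assumptions, not a specific HPE pipeline:

```python
# A minimal document-chunking step: split text into overlapping chunks
# so each piece is small enough to embed and index for retrieval.
# The sizes here are illustrative; real pipelines tune them per use case.

def chunk_document(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into ~chunk_size-character chunks; the overlap keeps
    sentences cut at a boundary intact in at least one chunk."""
    step = chunk_size - overlap
    return [text[start:start + chunk_size]
            for start in range(0, max(len(text) - overlap, 1), step)]

policy = ("Full-time employees accrue 25 days of PTO per year. "
          "Requests must be approved by a line manager at least two weeks "
          "in advance. Unused days may roll over, up to a maximum of five, "
          "into the following calendar year. ") * 3  # stand-in for a long document

for i, chunk in enumerate(chunk_document(policy)):
    print(f"chunk {i}: {len(chunk)} chars")
```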

Michael Bird (06:31):
[inaudible 00:06:31] feel like organizations from an AI perspective know what they need slash want? Or do you sometimes have the occasional like, oh, we just want some AI. Is there still a bit of that or-

Jimmy Whitaker (06:41):
It's a mixed bag and so sometimes we-

Michael Bird (06:44):
You are getting the, we want some AI?

Jimmy Whitaker (06:46):
Yes, absolutely. Or we think AI can solve this problem. What do you think? And it kind of depends. Some people who are just very AI-heavy or they have a problem that they've needed to solve and they've been attempting to solve it for a long time, now they want to use a generative AI model to plug into the system they've already been designing for years. Using AI to solve a problem can be a reasonably straightforward way to solve some of that.

(07:08):
And it comes with all the considerations, okay, this means we need a lot of GPUs, we need some software to run on top of the GPUs that gives us what we need and so on and so forth. Those are pretty straightforward. But we do get some that are, we have all these different problems and we've heard that AI can solve all our problems, and so where do we start? And then that becomes more of a conversation and education process. Finding out what kind of problems they actually want to solve, what kind of data do they have, and then how do we partner up to be able to do this most effectively?

Michael Bird (07:35):
Because presumably once you get to the stage of like, okay, yeah, we figure out some sort of large language model can solve this issue, I guess there is then the question of what you're going to run it on? How are you going to power it? Because presumably it's not... You can't just whack it on a server that's ticking over in the background-

Jimmy Whitaker (07:50):
Yes. Exactly.

Michael Bird (07:51):
[inaudible 00:07:51]. It takes a bit more than that, doesn't it?

Jimmy Whitaker (07:53):
It really does. And we're also seeing, depending on the industry, if it's in healthcare, finance, then the data privacy and security aspects are huge components of that. And so just throwing your data in the cloud or using some cloud service, not exactly a possibility. So we see quite often the people that are coming to us at HPE are very interested in how do I do this securely and looking at the long-term horizon of this as well. So for instance, I know that this is going to be very expensive to do in the cloud, so therefore I'm looking at buying the servers that are needed for this, installing the GPUs that are required for this and everything else. And then what software pairs on top of that to be able to make this all possible because I'm not getting all the nice tools that people have built on cloud platforms to make that possible. So yeah, it becomes an interesting conversation.

Michael Bird (08:40):
Is it starting to sort of reignite the private cloud, public cloud, on-premise, in the cloud debate? Is AI sort of fanning it a bit?

Jimmy Whitaker (08:49):
It really is, and it kind of comes down to cost and just value at the end of the day. And so, well, I have some opinions in here. People I think thought the cloud was going to make everything much, much cheaper because you can turn on and off things, but people just aren't really turning on and off things as much as maybe they thought they were-

Michael Bird (09:05):
People are busy.

Jimmy Whitaker (09:06):
Exactly, exactly. So we are seeing a lot of almost hybrid cloud type setups. I have my hardware that's on-premise that I'm using, and then when I max that out, I may burst into the cloud to take advantage of that dynamic, on-demand aspect of the cloud, which it's great for. But then we're also seeing, yeah, we want to know our base costs and be able to manage our own hardware. We know we're going to need to upgrade it at some point, but when is that going to happen? And yeah, if you're just completely relying on an external service, that's a little bit harder.

Michael Bird (09:35):
Yeah, yeah. Okay. What is the greatest challenge in implementing an effective AI or natural language processing system?

Jimmy Whitaker (09:44):
It's a hard question to answer. What is the most important thing? It depends on the scenario. Yeah, no one likes the "it depends" answer, but it's definitely the most common. It's the easiest one to give. The component that I've found most important is realizing that you're not just going to solve your AI problem and then be done with it. It just never really works that way. What I usually push for is setting yourself up for iteration and knowing you're not going to solve the problem once and for all. It's going to bring up other problems, and you'll be able to solve more and more. But it's just setting yourself up for this iteration and realizing it's going to be a life cycle.

(10:14):
You've worked really hard for maybe years to develop this whole piece of software to solve this problem. Once it touches real data, all bets are off. You never know how that's going to play out, and you're going to hit situations where somebody's asking you a question you never really considered before, and now you have to figure out how to incorporate that into the overall system. We often like to think, oh, I'm going to throw AI at this and then I'm done with it and profit. But in all honesty, you're going to continue to iterate. You're going to have to keep going through a process to get to something valuable.

Michael Bird (10:47):
I guess there's an element of: this generative AI tool that everyone knows about hit the market, and I think what we're discovering now is that it's significantly harder to get it right.

Jimmy Whitaker (10:57):
Yes. I think we're definitely seeing that with all the discussion of hallucinations, or with an agentic strategy starting to come up, which is basically using the LLM, iterating through it, and providing tools, because there are all kinds of issues with it. For instance, current data, like adding search to it, and all the other things that we were discussing. I think people realize, oh, we have this huge AI model, it's amazing for a lot of things, but it's definitely far from perfect, and the more people want to use it, the more perfect it needs to be. And so I think it's exactly that. Now we're iterating towards: what can we add on top of this to make it better, to make it solve the problems we want to solve? It can write poems for us, but that's only so useful.

Michael Bird (11:36):
How quickly is the technology emerging and evolving?

Jimmy Whitaker (11:39):
Yeah. So NLP really has kind of been subsumed into GenAI. For instance, a simple task like sentiment analysis: is this a positive, negative, or neutral statement, say about a stock or something like that? People would use that... Count the number of good words in a sentence, count the number of bad words, do some statistical analysis on it to be able to determine those things. And now you can give the statement to a GenAI model and say, predict one of these three things, and in general an LLM can do a pretty good job at that task. So it's kind of an odd one. The GenAI space has really absorbed NLP, and LLMs are, I guess, a branch of NLP in and of themselves. So it's been interesting. I would say a lot of the NLP work has gone heavily down the GenAI and large language model route. There are still some developments, but they're very, very minor compared to the LLM stuff.
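Here's a short sketch contrasting the two approaches Jimmy describes: the classic "count good words, count bad words" method versus simply describing the task in a prompt for a GenAI model. The word lists, function names, and prompt wording are all illustrative, and the model call itself is left as a placeholder since the API depends on which LLM you use:

```python
# Two ways to do sentiment analysis on a statement about a stock.

# Classic lexicon approach: score by counting positive vs. negative words.
POSITIVE = {"gain", "beat", "strong", "up", "growth"}
NEGATIVE = {"loss", "miss", "weak", "down", "decline"}

def lexicon_sentiment(text: str) -> str:
    """Count word hits and turn the score into a label."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

def llm_sentiment_prompt(text: str) -> str:
    """GenAI approach: just describe the task; this prompt goes to an LLM."""
    return ("Classify the sentiment of this statement about a stock as "
            f"exactly one of: positive, negative, neutral.\n\nStatement: {text}")

headline = "Shares fell sharply after earnings missed expectations"
# "fell" and "missed" aren't in the lexicon, so the classic method
# wrongly says "neutral" -- exactly the brittleness an LLM tends to avoid.
print(lexicon_sentiment(headline))
print(llm_sentiment_prompt(headline))
```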

Aubrey Lovell (12:34):
Thanks Michael and Jimmy, what a great interview. And we'll be back with you in a moment, so don't go anywhere.

(12:41):
Alrighty. Well, it's time for Today I Learned, the part of the show where we take a look at something happening in the world we think you should know about.

Michael Bird (12:48):
Yeah. So Aubrey, it's one from me this week. Now, I know you're Technology Now's space correspondent, but I've nabbed this story this week, because NASA has begun testing swarms of tiny swimming robots, which it hopes one day to deliver to the moons of Jupiter to look for life deep under the surface. The program, run by NASA's Jet Propulsion Laboratory and called the Sensing With Independent Micro-swimmers program, or simply SWIM (see what they did there), aims to deliver a payload of dozens of cellphone-sized robots to the icy moons hundreds of millions of miles from Earth.

(13:28):
The carrier pod would heat itself to melt through the surface ice and travel tens of kilometers down until it hit liquid water, then disperse the drones, which would operate as a swarm to scan for life. The SWIM team's latest iteration is a 3D-printed plastic robot that relies on cheap commercial motors and electronics. In early tests at a local swimming pool, the drones showed the ability to stay on, and correct, their course while following a back-and-forth "lawnmower" exploration pattern. It's hoped that with some miniaturization, and a lot of hours of simulation to get the best mix of battery life versus instruments, a future swarm would be able to explore 3 million cubic feet of water, about the same as 40 full-sized swimming pools.

Aubrey Lovell (14:15):
Wow, that's really, really interesting. Thanks for that, Michael. Very cool.

Michael Bird (14:22):
Right. Well, now it's time to return to our guest, Jimmy Whitaker, to talk about AI design and implementation.

(14:27):
It feels like a bit of an obvious question, but we ask all of our guests this question every time. Why should organizations care about AI? Why should organizations care about NLP? Why should organizations care about figuring out the strategy around how to implement this stuff?

Jimmy Whitaker (14:41):
It's a very good question, and I could go a couple of different routes. Probably the one that makes the most sense to me is: these things are capable of providing incredible value, and it really is going to be the next standard. I mean, in my opinion, you probably could have asked the same question 30-ish, 40-ish years ago about whether you should have a strategy around the internet. And now, if you asked a company that does e-commerce or whatever else, it would be ludicrous not to have some type of strategy there. So I think it really is going to be squarely in that realm moving forward. It's not a strong answer, but really the overall value of it is huge when it's applied correctly. It's just kind of hard to see exactly how applying it correctly is going to go, and there's going to be a lot of rough spots along the way.

Michael Bird (15:30):
So it's not a question of do you have an AI strategy, it will be a question of what is your AI strategy?

Jimmy Whitaker (15:36):
Yes, yes. And the lack of a strategy is a strategy in itself. If you don't have an AI strategy, you're basically saying, "I don't think AI is going to be anything significant, and therefore it won't impact us." And that really opens up some risks, I think, as far as industries go.

Michael Bird (15:56):
Because even if you're not interested as an organization, there will be external factors, suppliers, and-

Jimmy Whitaker (16:01):
Yes, absolutely. This is a simple one, but I've seen quite a few people that say, okay, we have our SQL databases set up with all the data from our, I don't know, e-commerce platform or public information or anything like that. And a lot of the people asking the questions, like how much of this product did we sell last week, don't know how to write SQL queries. And so by combining large language models with some of the SQL-focused large language models and those types of things, you can ask those questions in plain English and get results back that are accurate and reliable to a certain degree. So you're putting the power of coding in the hands of novice coders, or even somebody who's completely unaware of how to do any of that, and making them a lot more powerful. And they have the intuition around why something is happening for a particular product.
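Here's a rough sketch of that text-to-SQL pattern: a plain-English question plus the table schema goes into the prompt, and the SQL the model returns is run against the database. The tiny in-memory table, the prompt wording, and the hard-coded "model reply" are all illustrative stand-ins:

```python
# Text-to-SQL sketch: the user asks in plain English, the LLM is prompted
# with the schema, and the SQL it returns is run against the database.
import sqlite3

# Hypothetical e-commerce sales table, for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (product TEXT, quantity INTEGER, sold_on TEXT)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)",
                 [("widget", 12, "2024-05-01"), ("gadget", 7, "2024-05-02")])

SCHEMA = "sales(product TEXT, quantity INTEGER, sold_on TEXT)"

def text_to_sql_prompt(question: str) -> str:
    """The prompt a text-to-SQL model would receive: schema plus question."""
    return (f"Given the table {SCHEMA}, write a SQL query that answers:\n"
            f"{question}\nReturn only the SQL.")

print(text_to_sql_prompt("How many widgets did we sell?"))

# Stand-in for the model's reply; a real system would validate it before
# executing, since generated SQL is, as Jimmy says, reliable only to a degree.
generated_sql = "SELECT SUM(quantity) FROM sales WHERE product = 'widget'"
print(conn.execute(generated_sql).fetchone())  # -> (12,)
```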

Michael Bird (16:50):
Yeah, yeah. Jimmy, thank you so much.

Jimmy Whitaker (16:52):
Absolutely.

Aubrey Lovell (16:53):
Thanks so much, Michael, for bringing us that. It's been great to hear from Jimmy. And you can find more on the topics discussed in today's episode in the show notes.

Michael Bird (17:03):
Right. Well, we are getting towards the end of the show, which means it is time for This Week in History, a look at monumental events in the world of business and technology which have changed our lives.

Aubrey Lovell (17:14):
Okay, so the clue last week was it's 1954, and this invention was a true gem. Do you have any ideas, Michael, on what it was?

Michael Bird (17:23):
Yeah, yeah. I think on the last episode, we both thought it'd be something to do with diamonds, so I'm going to say diamonds. What is the answer?

Aubrey Lovell (17:31):
Well, it was the creation of the first industrial process for creating lab-grown... We guessed it, diamonds.

Michael Bird (17:40):
Ah.

Aubrey Lovell (17:40):
A team led by one Professor Hall created a special pressure vessel that could subject carbon compounds to pressures of up to 1.5 million pounds per square inch at temperatures of up to 5,000°F. So that's about 10.3 gigapascals and 2,760°C in metric terms.

Michael Bird (18:04):
Wow.

Aubrey Lovell (18:04):
The diamonds were tiny, as large as a grain of sand, but that was by design, because it was a standard size used in diamond drill bits and saws. Artificial diamonds had been created before, notably a little earlier by a Swedish team, but this was the first repeatable process, and it spawned a manufacturing revolution.

Michael Bird (18:23):
Very cool. Thank you, Aubrey. And we will be taking a short break for the holidays. But upon our return on January the 9th, the clue will be it's 1785, and this trip really elevated the travel experience.

Aubrey Lovell (18:38):
Interesting.

Michael Bird (18:39):
Well, that brings us to the end of Technology Now for this week. A huge thank you to our guest, Jimmy Whitaker, Chief Scientist of AI and Strategy at Hewlett Packard Enterprise. And you of course, thank you so much for joining us.

Aubrey Lovell (18:51):
Technology Now is hosted by Michael Bird and myself, Aubrey Lovell.

Michael Bird (18:55):
And this episode was produced by Sam Datta-Paulin and Alicia Kempson-Taylor with production support from Harry Morton, Zoe Anderson, Alison Paisley, and Alyssa Mitri.

Aubrey Lovell (19:03):
Our social editorial team is Rebecca Wissinger, Judy Ann Goldman, Katie Guarino. And our social media designers are Alejandra Garcia, Carlos Alberto Suarez, and Ambar Maldonado.

(19:15):
As mentioned before, we'll be taking a break for a few weeks over the holiday period, but we'll see you with new episodes on January 9th. Technology Now is a Lower Street production for Hewlett Packard Enterprise, and we'll see you next year. Cheers.

Michael Bird (19:28):
Cheers.
