Unlocking Gen AI Potential in IT – Transcript
Anthony Snowball:
At the end of the day, clients have some real use cases that they’ve implemented. They’ve solved really challenging problems with the use of Gen AI and other complementary technologies, and have generated very real benefits in the form of savings and capacity creation.
Announcer:
Welcome to The Hackett Group’s “Business Excelleration Podcast.” Week after week, you’ll hear from top experts on how to achieve Digital World Class® performance.
Anthony Snowball:
Hello and welcome to The Hackett Group’s “Business Excelleration Podcast.” I’m your host Anthony Snowball. I lead Hackett’s AI go-to-market, as well as our AI IP innovation. Today, we are talking about the five steps that you can use to accelerate your journey with Gen AI and other related AI technologies. I’m proud to be joined by my esteemed colleagues, Kyle Robichaud and also Joe Nathan, who are experts and senior leaders within our IT practice.
Rather than waiting, why don’t we just jump straight in, gentlemen, and start the conversation with a question that a lot of clients have been asking. We’ve been hearing a tremendous amount about Gen AI solutions in terms of the marketplace potential. What are your perspectives on the power of Gen AI and the types of benefits that clients can realize?
Joe Nathan:
So there’s a big buzz around AI on both the supply and demand side. When I say the supply side, this is coming from the vendors – platform and product vendors, and also specific AI vendors. There are a lot of platforms, and it has gotten to the point where every platform vendor has some kind of AI capability woven into their platform. Some of these are genuinely AI, and some sit on the border of non-AI as well. On the demand side, there’s a lot of interest from consumers of AI – both enterprises and individual consumers. Everyone wants to use AI, and there are a lot of success stories that we hear about AI solutions creating some kind of productivity gain or some kind of benefit for companies. Hackett recently commissioned a study on Gen AI, and we estimated about 44% capacity creation. When I say 44% capacity creation, that capacity can be used to increase throughput or productivity, or some companies choose to look at it from a cost reduction standpoint as well.
Beyond capacity creation, we are also looking at breakthrough thinking, which we call exponential AI benefit. This relates more to, say, building new revenue opportunities. A few examples are companies in drug manufacturing or drug testing, where you always have to walk through multiple years of testing. Now with AI coming in, there is so much testing that you can automate, and the drug development process is a lot faster now – and this is just one example in life sciences. In every industry you look at, there are so many examples where AI changes the paradigm of how you can develop new products.
Anthony Snowball:
What you described is that there’s a really strategic opportunity for clients around – call it – productivity and capacity creation, and how they deploy that capacity, which is exciting. As you start to think about the net winners in this game, to inspire our listeners: where and how are clients really taking advantage of this breakout technology, how are they deploying it, and specifically what benefits are they realizing? Those are the big questions these days. Everyone’s jumping into POCs, but what are the benefits that you’re seeing?
Joe Nathan:
Yeah, first I’ll start off with some examples, right? In our field of information technology, we see AI being used in numerous applications, so I’ll give a few examples and talk about the benefits as well. A big example is around service desk ticket resolution. Traditionally, when a user has a question or a trouble ticket, it gets channeled to an individual and that person starts resolving those tickets. One, there’s a capacity crunch, where the end user has to wait for the engineer to come back and work out how to resolve the ticket. Now with AI coming in, there are numerous tickets that can be resolved through AI. AI is trained to learn and address those tickets. So first, it makes the process a lot easier. There’s no waiting for the end user – as soon as they have a problem, AI looks at it, kind of heals it and fixes it for them. So that solves it.
And second, from an IT standpoint, you can use whatever capacity is remaining to do other, higher value-add work. There are numerous platforms that do it as well – it’s not that you have to build it up from scratch, from the ground up. Many of the dominant service platforms have this capability inherently built in, so it’s a lot easier.
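To make the ticket-deflection idea concrete, here is a minimal sketch of an LLM-assisted triage step. It is not the embedded capability of any particular platform Joe mentions; the knowledge-base entries, model name and choice of the OpenAI Python SDK are illustrative assumptions.

```python
# Minimal sketch, not a specific vendor's implementation: triaging a service desk
# ticket with an LLM. Assumes the OpenAI Python SDK and an OPENAI_API_KEY are set;
# the knowledge-base entries and model name are placeholders.
from openai import OpenAI

client = OpenAI()

# Hypothetical knowledge-base snippets the model can draw on for self-service fixes.
KNOWN_FIXES = {
    "vpn": "Restart the VPN client, then re-authenticate with your SSO token.",
    "password": "Use the self-service portal at /reset to reset your password.",
}

def triage_ticket(ticket_text: str) -> str:
    """Ask the model to resolve the ticket from known fixes, or escalate."""
    context = "\n".join(f"- {k}: {v}" for k, v in KNOWN_FIXES.items())
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You are a service desk assistant. If one of the known "
                        "fixes below resolves the ticket, reply with that fix. "
                        f"Otherwise reply exactly 'ESCALATE'.\nKnown fixes:\n{context}"},
            {"role": "user", "content": ticket_text},
        ],
    )
    answer = response.choices[0].message.content.strip()
    # Anything the model cannot resolve is routed to a human engineer.
    return answer if answer != "ESCALATE" else "Routed to engineer queue."

print(triage_ticket("I can't connect to the corporate VPN from home."))
```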
Another example I would give is around testing. Testing has also been an evolution. It used to be that when developers were developing new platforms, everything got tested by human testers. Slowly we came to a world where it was automated, but that was more process automation – you take a testing process and you automate it.
Now you have AI embedded into testing platforms, and it creates a new paradigm. AI also generates test cases – it looks at your code, looks at your functional stories or functional use cases, and creates test cases. And second, even the time it takes to generate those test cases or to program the system to do the testing – AI does it a lot faster. So that’s a big benefit and a productivity boon for the IT team.
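As one illustration of generating test cases from a functional story, here is a minimal sketch that asks a general-purpose LLM to draft pytest functions. The user story, model name and function under test are hypothetical, and this is not a description of any specific testing platform's built-in capability.

```python
# Minimal sketch: asking an LLM to draft pytest cases from a functional user story.
# Assumes the OpenAI Python SDK; the story and function under test are placeholders.
from openai import OpenAI

client = OpenAI()

user_story = """
As a customer, I can apply a discount code at checkout.
Acceptance criteria:
- A valid code reduces the order total by the code's percentage.
- An expired code is rejected with an error message.
- The discount never makes the total negative.
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "You write pytest test functions. Return only Python code "
                    "that tests a function apply_discount(total, code)."},
        {"role": "user", "content": user_story},
    ],
)

# The generated tests still need human review before they join the test suite.
print(response.choices[0].message.content)
```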
The third example I would give is in development. In development, there are numerous cases where AI can help out. One example you probably hear a lot about: if I have a specific use case or a specific requirement, I just feed that question to ChatGPT, it provides me a result, and it gets me the code.
There are some benefits to that, but at the same time there’s also a challenge: if you don’t know how to use it, you run into an issue where you get back code that does not align with, or is not suitable for, your environment. So you’ve got to make sure that you feed in the right parameters so you get the right code. If you know how to use it, it’s going to make things a lot easier from a development standpoint.
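A minimal sketch of that "right parameters" point: package the requirement together with explicit environment constraints so the generated code fits the stack. The frameworks, versions and file names below are hypothetical, and the messages could be sent to whichever approved coding assistant a team uses.

```python
# Minimal sketch: spell out the environment constraints before asking a model for
# code, so what comes back fits your stack. The stack details below are hypothetical.
ENVIRONMENT_CONSTRAINTS = """
Target stack (do not deviate):
- Python 3.11, Django 4.2, PostgreSQL 15
- Database access only through the Django ORM (no raw SQL)
- Follow our logging convention: use logging.getLogger(__name__)
- Reuse the existing Customer model in billing/models.py
"""

def build_codegen_messages(requirement: str) -> list[dict]:
    """Package a requirement with environment constraints for a chat-style model."""
    return [
        {"role": "system",
         "content": "You generate production code for this environment.\n"
                    + ENVIRONMENT_CONSTRAINTS},
        {"role": "user", "content": requirement},
    ]

# Example: these messages would be passed to an approved Gen AI coding assistant;
# the point is that the constraints travel with every request.
messages = build_codegen_messages(
    "Write a view that lists a customer's unpaid invoices."
)
print(messages[0]["content"])
```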
Anthony Snowball:
These are great examples, right? At the end of the day, clients have some real use cases that they’ve implemented, solved really challenging problems with the use of Gen AI and other complementary technologies, and have generated very real benefits in the form of savings and capacity creation. So terrific examples, and thanks for walking through them. And it gets me pondering where CIOs are getting the highest-impact returns. You shared super examples related to the service desk, as well as code generation and code testing. Phenomenal examples.
And these are what I would call success stories. But everyone, when they drive down the highway and see a big crash, almost wants to look at it to understand what they can do to avoid it themselves. Oftentimes there have been AI uses that have gone wrong or didn’t realize the results, and we should learn from those as well. So, Kyle, maybe there are challenges that we should consider that prevent IT organizations from unlocking the power of Gen AI, and things we should all be watching out for as we’re driving down the road and deploying Gen AI.
Kyle Robichaud:
Well, Anthony, I appreciate that. At the end of the day, we still see the typical technical, legal and ethical challenges that come with AI, but interestingly enough, the No. 1 challenge that we’re seeing – and we’re talking with clients all the time – is an age-old problem that companies have had, and it’s really around their data quality. When it comes to developing Gen AI solutions, companies that try to use their own data are realizing again and again the importance of all aspects of data quality: the timeliness, the accuracy and the ownership of the data all come into play. And I’ll tell you, we’re working with a client currently that is having this exact issue. They’ve got multiple HCM systems around the globe, and they struggle to have a view of their entire employee landscape. When we were talking and trying to figure out what potential AI solutions could really help them work with their people, it became a real challenge.
So it comes down to the fundamentals of having your data in order, with good master data management strategies and good data governance around the organization, and tying that all together. Another area that is still challenging for companies from a technology landscape standpoint is the legacy applications that they have. It becomes a lot more difficult. With legacy applications, obviously there aren’t going to be embedded AI capabilities built into those tools. So, for example, if they haven’t upgraded to the latest versions of ServiceNow or SAP or Salesforce, they’re not going to be able to leverage the AI capabilities that these companies are putting on top.
And at the same time, if they’re trying to take advantage of AI capabilities with their legacy platforms, they’re going to add complexity with additional integrations that they’ll have to manage over time within their environment.
And I think lastly, just to mention the whole security concern here: obviously there are security concerns with corporate data getting out there once an LLM has some of that information. But one of the real concerns is around the closed models that are out there on the market, where companies don’t understand how decisions are being made. And when they don’t have the ability to backtrack and understand, it becomes a liability for them. So it does slow them down.
Joe Nathan:
I would also add training risk, because oftentimes I’ve seen clients use AI, but they’re not effectively trained on using those AI solutions. You may have all the right processes, you may have the right tools or technologies, you may even have the right data, but if you don’t know how to use these tools the right way, it can lead to chaos or a suboptimal process. You may actually be doing more harm than creating benefit. To give an example – I think I mentioned it earlier as well – when you’re using AI to develop new code, you need to know how to use it so that you’re not creating suboptimal code for your organization, or even a suboptimal application. It has to align – it needs to know what frameworks to use or what databases it has to refer to. So many clients are now training their people on prompt engineering, and they feel that’s the next-level skill set – a mandatory training now for many developers working with AI. So training is going to be an important aspect as well.
Anthony Snowball:
Yeah, it sounds like there are a lot of lessons to be learned here around data, IP protection, ethics – all the way through to security. And even, as you raised, Joe, some of the training aspects, because this is a modern workforce evolution where you are now starting to introduce new capabilities, and your teams need to know how to use them. You shared the prompt engineering piece, and similarly, with a lot of the popular products on the market now – from Gemini to Claude to Copilot – there is a lot of power in the hands of the end user. So very clearly that is a big point to emphasize. Thanks for raising that.
As we start to evaluate what sounds like an exciting opportunity for benefit realization from Gen AI, with some thoughtful planning and a pragmatic approach, there has to be a process, I would assume, that the two of you would recommend. Is there a secret sauce – a smart, intelligent way – that clients should be thinking about so they can quickly get moving on capitalizing on the Gen AI benefits and even the related AI technologies? I would love your perspective on that and what you’re seeing with your clients.
Joe Nathan:
That’s a great question, Anthony. This is something where every client that we speak with actually has a challenge, and sometimes they are doing it very suboptimally. So Hackett has developed a five-step approach for how we enable IT executives – and not just executives but also IT practitioners – to use AI responsibly and get value from it at the same time. The first step is around educating and informing your executive and practitioner teams and setting expectations around Gen AI. Right now we hear a lot of stories about AI solving situations, and everyone feels that if I use a Gen AI solution, it’s going to help me solve the problem I have. But clients also need to know what AI can do, what it cannot do, and how and when to use it. That’s going to be the first step – before you use a platform, you need to know what it can do for you.
The second step is to understand where the greatest opportunities exist for leveraging Gen AI across the function. While you look at Gen AI capabilities, you also need to know what the current challenge is that you’re trying to address, because sometimes you may have a perfectly smooth process where you could use AI, but it’s not going to give you much benefit. So you need to know when and where to use Gen AI capabilities as well. The third step is to examine specific use cases, based on the opportunities you identify, to understand how Gen AI could impact your work. Here again, you want to know what benefit AI can provide and have a good approach to developing the business case around it, knowing what value AI can deliver so you can use it accordingly.
The fourth step is determining your organization’s readiness to realize the benefits of Gen AI. You need to look at the data quality and whether you have the right skill sets, because you can get the AI tool developed or implemented in the organization, but if you don’t know how to operate it – how to support it – it’s going to be a challenge for you. You need to have the right governance. You need to have the right infrastructure so that you can still get the value from the tool.
And the last step we talk about is developing a prioritized Gen AI road map, because you don’t want a situation where you have 20 different Gen AI projects all going on at once. One, it’s going to cost you a lot of money. Second, you’re going to end up trying to run on day one instead of learning to crawl first, and then you’re not going to have a good evolution. So you want to make sure you have prioritized the right Gen AI capabilities, you know what’s going to be the best scenario for you, you have a road map, and you try to work according to that road map.
Anthony Snowball:
If they take a thoughtful approach, as you said – educating, identifying the greatest opportunities and starting with where the best return is – they’re already automatically going into the right areas, and they’re going to see some form of benefit that contributes to their organization’s productivity. That would then lead into a natural use case selection process, readiness, fit and then the road map. A lot of that makes a lot of sense, and I know we see a lot of clients doing that.
As you start to think more broadly, though, about moving forward with a program like that – the clear five-step process – in my interactions with clients, I hear a combination of: I hear about Gen AI all day long and in every single corner of the research universe; my board is being infiltrated with messaging from investors; my C-suite is hearing from the board. How do I work my way through and not only sift out what’s truly possible with the technology, but more importantly, how should I set expectations inside my organization? A lot of the CIOs and C-suite leaders that I speak with are very much centered on trying to thoughtfully design an approach that allows them to take a measure-twice, cut-once sort of thinking, and they want to be pragmatic in that. But what are the two of you seeing as it relates to managing expectations and not allowing this massive hype machine to infiltrate your organization and lead to some risky decisions? What do you recommend for those clients?
Kyle Robichaud:
So Joe, I’ll start with that one. Really, when it comes down to it, they should start with a transparent approach, and they need to make sure they understand the limits that their technology and organization have today before they get started. To start, they should set some clear objectives and understand what those objectives are for the business, right? That could be efficiency, that could be innovation, it could be developing new value streams for the business, but really understanding that will help guide the strategy they should put in place.
And when they go about creating that strategy, they need to make sure it’s realistic. They need to understand their current state: What is their data readiness? Is their technology infrastructure ready today? Do they have the employees with the right skill sets? That becomes really important. And make sure you start small, right? Deliver some quick wins back to the business to make sure that you’re iterating, learning and building confidence within the business, so you can set the stage for a broader deployment of AI capabilities in the future.
It’s something that you should be reviewing on a regular basis to know whether or not you’re going in the right direction. Constantly double-check and understand the capabilities and the limits that your organization has, so that you’re not trying to overreach. It’s an interesting topic because I’ve had conversations with leadership at multiple companies. One CEO I was talking with – his strategy is to start small and invest a little bit, but really it’s more of a wait and see. They want to be fast followers in the AI space because they understand how fast this is moving. And part of that strategy is that they don’t want to put a whole bunch of investment – this isn’t for every business, but for their business – into technologies that they might be able to leapfrog on the way to the next version of Gen AI.
If you think about how we now have all these multimodal-type solutions starting to come out that weren’t available just a short time ago, some businesses are taking a more cautious approach to understand what’s out there, while others are trying to use this as a way to drive value streams within their company. So they’re developing their COEs, and their strategy is to invest more heavily in how they can outpace the competition using the capabilities of Gen AI.
Anthony Snowball:
Those are great points. I think that is a thoughtful approach, right? You have those that are watching and learning but developing a plan. You have those, as you said, that are experimenting and starting to dabble with POCs. And you have those that are really mature, ideally with more of a digitally native model, or somewhere in between – a COE that’s helping them appropriately determine where to deploy AI and for what benefit. And you hear – at least I do in the interactions that I’m having, and I know this group has now talked to over 300 senior executives in the C-suite – that really everyone is in the process of defining their plan, experimenting with the technology, or further down the path than that. But you hear so much currently around the emphasis on use cases: I’m using this use case here, or I’ve heard a good one at a conference, or a vendor came in and automated a particular use case.
And I can’t help but feel that that approach seems pretty tactical to me – picking and choosing based on where the hype exists around use cases, or where you have willing executives inside a company who want to engage in experimenting with the technology – whereas what you discussed up front, Joe, felt more top-down. I think there are a lot of lessons to learn as we talk through Gen AI: not only case study possibilities around the art of the possible and the returns you can realize, but also being cautious not to be overly focused on the use cases, and instead taking more of a top-down approach and mindset to defining where the best returns exist for AI. And you have each shared lessons learned along the way and helped guide our listeners on how to prepare themselves with a quick, simple five-step process.
One thing we haven’t talked about – and in my observation this has a lot to do with evaluating the return associated with AI, because that does differ from client to client – has everything to do with their technology environment, their architecture, their data, and really the risk involved. And I know earlier we spoke about some of the concerns and risks around data and ethics. So as we’re gauging and thinking about organizational readiness, what are some of those important considerations as you start to home in on a playbook of the top and best use cases that make sense for your organization and will deliver the greatest return? What should they be considering when they’re gauging readiness?
Joe Nathan:
This is a great question and also a very important aspect for enterprises, and for me, because from a strategy standpoint, companies are clear that, yes, AI is the future and I need to implement Gen AI. So they look at implementing it, getting the right policies, the right structure, working with the right partners and all of those good things.
Now we’ve got to make it real – we’ve got to make the strategy real, right? Companies are slowly learning and evolving and finding out what the tactical things are that they’ve got to do to make themselves truly Gen AI ready. A few examples I can give, based on my experience working with clients and on what Hackett has recommended to some of our clients. The first one is around data quality and availability. There are a lot of promising Gen AI platforms that provide great solutions or make your life a lot easier, but there is an underlying requirement: you’ve got to make sure that data is readily available, that the quality is good and that the foundational aspects of data have been met before you start slapping the Gen AI platform in. So that’s an important aspect.
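A minimal sketch of that kind of data-readiness gate, assuming a pandas-based profile of a hypothetical HCM extract; the file name, key column and freshness check are illustrative, not a prescribed Hackett check.

```python
# Minimal sketch: profile a dataset for completeness, duplicates and freshness
# before feeding it to a Gen AI solution. File name and columns are placeholders.
import pandas as pd

def data_quality_report(df: pd.DataFrame, key: str, updated_col: str) -> dict:
    """Return simple quality metrics; acceptable thresholds are up to the data owner."""
    return {
        "row_count": len(df),
        "pct_missing_per_column": df.isna().mean().round(3).to_dict(),
        "duplicate_keys": int(df[key].duplicated().sum()),
        "days_since_last_update": (
            pd.Timestamp.now() - pd.to_datetime(df[updated_col]).max()
        ).days,
    }

employees = pd.read_csv("hcm_extract.csv")  # hypothetical HCM extract
report = data_quality_report(employees, key="employee_id", updated_col="last_modified")
print(report)

# A gate like this can block a Gen AI pilot until the basics are in order.
assert report["duplicate_keys"] == 0, "Resolve duplicate employee records first."
```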
Another example I would give is around having good data governance policies, and oftentimes this takes a really long time for clients as well. When you’re using Gen AI platforms – especially open Gen AI platforms – you want to be sure which of your data can be in the open and which cannot. So you want to have a really good data classification model, and not just a model: you also need to make sure it’s well-adopted and that there’s good compliance in your [inaudible 00:22:44] organization on the data classification.
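One way to picture enforcing such a classification policy is a simple pattern-based check that redacts restricted data before a prompt leaves the organization. The patterns and labels below are illustrative assumptions, not a complete policy.

```python
# Minimal sketch: scan outgoing text for patterns classified as restricted and
# redact them before the prompt goes to an open Gen AI platform.
import re

# Hypothetical classification rules: label -> regex for data that must stay internal.
RESTRICTED_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "employee_id": re.compile(r"\bEMP\d{6}\b"),
}

def classify_and_redact(prompt: str) -> tuple[str, list[str]]:
    """Redact restricted data and report which classifications were found."""
    found = []
    for label, pattern in RESTRICTED_PATTERNS.items():
        if pattern.search(prompt):
            found.append(label)
            prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt, found

safe_prompt, hits = classify_and_redact(
    "Summarize the complaint from jane.doe@example.com about employee EMP123456."
)
print(hits)         # ['email', 'employee_id']
print(safe_prompt)  # restricted values replaced before anything leaves the organization
```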
The other aspect – and this is a really hard one – is around ethical concerns. There are a lot of ethical concerns. You may have seen things happening in the media around biases, or cases where AI provides you a result that is not to your ethical liking.
You need to make sure your engineers or your operators are fully trained on how to use it, how to look out for those biases, and how to work against them. The other example is around training. When I say training, this is both technology support as well as how to use AI solutions, and that’s also going to be an important aspect for your workforce. So these are some of the areas that, tactically, companies need to look at to make sure they are ready to use these AI platforms.
Anthony Snowball:
Sounds like there are a lot of different protective steps that are necessary. And for those clients that are just looking to get started – I know you laid out five discrete steps before, but in a very narrow sense – what’s the best way? What’s the first step that they should really consider in developing a prioritized road map? What would you recommend?
Kyle Robichaud:
So I’ll say in this space, this shouldn’t be too far off from how they go about their everyday enterprise portfolio management practices to begin with, right? Putting together all these use cases as a prioritized portfolio really helps, because you’ll be measuring the anticipated value and the effort it’s going to take to implement them, and putting them into a prioritized sequence that you can revisit on a regular basis. You could look at it on a monthly, quarterly, semi-annual or annual basis to make sure that you’re constantly going after the best use cases for the organization. New ones are going to come in constantly, so having a process where you’re able to review them against the existing portfolio of ones you’re executing will really help make sure that you’re constantly providing value back to the organization. So instead of trying to address every use case that somebody comes up with, use that standard process, and it’ll definitely provide some organization, some prioritization and a way to manage the investments that you’re making in this new technology.
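To illustrate the value-versus-effort sequencing Kyle describes, here is a minimal sketch; the use cases and scores are made up, and real portfolios typically weight risk, readiness and strategic fit as well.

```python
# Minimal sketch: score each Gen AI use case on anticipated value and implementation
# effort, then keep a prioritized sequence to revisit each review cycle.
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    value: int   # anticipated value, 1 (low) to 5 (high)
    effort: int  # implementation effort, 1 (low) to 5 (high)

    @property
    def priority(self) -> float:
        # Simple value-to-effort ratio; other dimensions can be added as weights.
        return self.value / self.effort

portfolio = [
    UseCase("Service desk ticket deflection", value=4, effort=2),
    UseCase("AI-generated test cases", value=3, effort=2),
    UseCase("Legacy code documentation", value=2, effort=4),
]

for uc in sorted(portfolio, key=lambda u: u.priority, reverse=True):
    print(f"{uc.priority:.2f}  {uc.name}")
```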
Anthony Snowball:
So that gives people a great place to start. I’d like to first thank each of you – Kyle and Joe – for joining me today on this podcast. And to our listeners, thank you for joining us; I’d encourage you to stay tuned for more insights on how you can unlock the power of Gen AI. Thanks to everyone for joining us. Have a great day.
Announcer:
Thanks for listening. If you liked this episode, please share it. You can find show notes, transcripts and related research at www.thehackettgroup.com/podcast. Subscribe on your favorite listening app so you never miss an episode. We welcome your feedback by rating this or any episode, or send us an email at podcast@thehackettgroup.com. The Hackett Group is a strategic consulting and executive advisory firm. Learn how we can architect your digital transformation journey, including Gen AI, at www.thehackettgroup.com.