Generative AI and Its Impact on SG&A – Transcript
Vin Kumar:
Don’t put in a Gen AI solution today and expect 40% savings to show up today. It’s not going to happen. It’s a journey you’re going to embark on that will fundamentally change the way you manage your processes and your function, with the objective of making them more and more autonomous. It’s not a one-year journey. It could be anywhere from five to seven years.
Announcer:
Welcome to The Hackett Group’s “Business Excelleration Podcast®.” Week after week, you’ll hear from top experts on how to avoid obstacles, manage detours and celebrate milestones on the journey to Digital World Class® performance.
Joe Nathan:
We’ve heard a lot about Gen AI. It’s getting a lot of hype and attention these days, not just among businesses but among consumers and enterprise users as well, and everyone is investing in this technology. Hackett recently conducted a study estimating the savings companies can achieve by applying Gen AI to their SG&A processes. But before we get into the study, our methodology and the savings, I want to understand: how does Hackett define Gen AI, and what is Hackett’s AI framework?
Vin Kumar:
That’s a great place to start, because as you said, there is a lot of press, a lot of talk and conversation around it. When we speak with clients, there is a lot of unpacking and resetting we have to do: What is Gen AI? What is not Gen AI? How should you think about it? So it’s a good place to start by framing Gen AI. As we all know, AI has been around for a while; this is not something new. What AI is fundamentally trying to do is emulate or simulate human intelligence – how we think and how we act. The best way to think about the AI that existed prior to Gen AI, which we now call cognitive AI, is that it focuses on our left-brain activities.
As humans, we have a left brain and a right brain, and we use them differently for different actions and purposes, relying on one more than the other depending on the task. The left brain handles activities that are more analytical and deterministic. Cognitive AI – again, the AI that came before Gen AI – works very similarly. It requires a set of instructions. We call the earliest form programmatic AI: you gave it a set of instructions and the machine would execute them at incredible speed. The work could be very complex and use a lot of data – something that would take humans a long time – but the machine could do it fast. Then that AI continued to evolve into what we call algorithmic AI, where the AI could read a certain pattern and come up with its own algorithm.
It would read that pattern and come up with its own algorithm – or, to put it another way, its own formula. The next time you gave it the variables, it would use the formula it had derived to calculate what the outcome should be. But fundamentally, it is still a set of instructions that the models execute to deliver an outcome. And one of the fundamental requirements for that is what we call structured data: data clearly arranged in rows and columns, with attributes clearly classified. It has to be aggregated, normalized and cleansed for the AI to operate and perform at its best. That’s what we call cognitive AI.
Generative AI is a type of AI that tries to mimic right-brain activity. It handles uncertainty much better, and it handles unstructured data much better. Think about the decisions you and I make based on experience and on what we see – the gut decisions that aren’t exact or purely deterministic. That’s what Gen AI is trying to replicate. It’s what we call probabilistic: the uncertain decisions we make are what it replicates best, and it does that very well. What it also does really well is generate content. It can generate content, it understands context, and it works really well on unstructured data.
That’s data found in documents – an audio file, images, even a music score sheet. Gen AI is able to take all of this unstructured data, extract information, make inferences and generate new content. So that’s fundamentally how we frame AI: cognitive AI, which is similar to the left-brain activities humans do, and generative AI, which is similar to right-brain activities. That’s the definition we use to help clients understand this and to provide a bit more context.
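To make the distinction concrete, here is a minimal, hypothetical Python sketch contrasting the two styles. The first function is the deterministic, rule-based “left-brain” cognitive-AI pattern; the second hands the same task – pulling fields out of an unstructured invoice email – to a large language model through an OpenAI-compatible chat API. The model name, prompt, field names and sample email are illustrative assumptions, not part of Hackett’s study.

```python
# A minimal sketch contrasting cognitive (deterministic) AI with generative AI.
# Assumptions: the OpenAI Python SDK is installed and an API key is configured;
# the model name and invoice fields are purely illustrative.
import json
import re

from openai import OpenAI  # pip install openai


def extract_invoice_total_rules(text: str) -> float | None:
    """'Left-brain' approach: a fixed, programmatic rule.

    Works only if the document matches the exact pattern we anticipated.
    """
    match = re.search(r"Total due:\s*\$([\d,]+\.\d{2})", text)
    return float(match.group(1).replace(",", "")) if match else None


def extract_invoice_fields_llm(text: str) -> dict:
    """'Right-brain' approach: ask an LLM to read the unstructured text.

    The model infers the fields from context, even when wording varies.
    """
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "Extract vendor, invoice_date and total_due from the "
                        "text. Respond with JSON only."},
            {"role": "user", "content": text},
        ],
    )
    return json.loads(response.choices[0].message.content)


email = ("Hi team, attached is the March invoice from Acme Corp. "
         "Total due: $12,430.50 by April 15.")
print(extract_invoice_total_rules(email))   # deterministic: 12430.5
# print(extract_invoice_fields_llm(email))  # probabilistic: fields inferred from context
```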
Joe Nathan:
Awesome, thank you. The way you distinguish traditional AI from Gen AI, and the capabilities Gen AI brings, is certainly helpful detail. We also hear a lot about Gen AI applications in core business functions – product design, operations management, supply chain logistics – and all of that is happening. I know the Hackett study focused specifically on SG&A functions. So before we get into the study details, I’d like to know: what are some of the Gen AI use cases that apply in SG&A areas?
Vin Kumar:
Sure. The way we advise clients to think about adopting Gen AI is that there are three fundamental ways to do it – whether in your operations, on the product and business side, or in your SG&A functions. The first is what we call embedded Gen AI. You already use multiple solutions and services today: in your financial systems you may be running an Oracle, SAP or Microsoft Dynamics ERP; you may be using something like Workday, Oracle or SAP for human capital management, and so on for other processes and functions. Those systems are now going to be enabled with Gen AI capabilities. The solution providers are building Gen AI capability directly into the software solutions you use today.
That’s what we call embedded. There isn’t much the enterprise needs to do – you don’t have to put new infrastructure in place for any of it. You work with your solution providers, and if you like some of those features – some of them will obviously come at a premium – and it makes business-value sense for your enterprise, you turn the feature on and use it. There’s nothing an enterprise needs to build from an infrastructure perspective. And there are multiple examples: Microsoft Office 365 now has Copilot, which you can turn on and use; ServiceNow’s Vancouver release has Gen AI capability; and Oracle, SAP and Microsoft Dynamics all have release schedules for the Gen AI capabilities they’re going to provide.
So as an enterprise, you need to understand what functionality is coming in the solutions you use today and what the road map is. If it delivers value, you turn it on. That’s one way of using Gen AI in the enterprise. The second way is what we call native Gen AI solutions. Here, the enterprise invests in an enterprise Gen AI platform – Microsoft’s Azure OpenAI Service, IBM’s watsonx, Amazon’s Bedrock. You invest in a solution that provides a platform, and using that platform, you start developing AI solutions of your own. You should think of it truly as a platform: you identify a use case where you want to apply Gen AI, you use the platform to build a solution for that particular use case, and then you deploy it to the users who will be using it.
So it’s not one big Gen AI solution that supports everything; it’s a platform on which you build a solution for each use case, and there are multiple types of Gen AI solutions you could use to do that. The third category is what we call domain-specialized Gen AI solutions. Here you’re going to see more and more solutions coming to market that are trained on a particular domain – legal, marketing, customer service, tax, regulatory filings, onboarding, talent acquisition. For each of these domains there will be a specialized solution that you as an enterprise subscribe to, fine-tune and use in your own enterprise. The reason we think companies – especially on the SG&A side – are not going to build their own domain-specialized Gen AI solutions is that building one requires three fundamental things.
One, a lot of data; two, access to a large pool of subject matter experts; and three, significant investment in infrastructure to train the models on that domain. In SG&A, we feel there’s a better return in subscribing to the solutions coming onto the marketplace, then fine-tuning and deploying them within your enterprise. On the operations, R&D or product side, however, you would invest in building your own domain-specialized models. You see a lot of pharmaceutical companies building their own to help with vaccine identification, clinical trial optimization, or identifying and creating new molecules. There was an article just yesterday in The Wall Street Journal about Gen AI solutions coming to chip design. These are all domain-specialized models, and it makes absolute sense for companies to invest in them on the business, product and revenue side.
But on the SG&A side, it’s better for these functions – finance, HR, legal, customer service – to subscribe to a model and fine-tune it for your enterprise. So that’s the way to think about it: embedded, native and domain-specialized. Today, when we look at the proofs of concept clients are building, they’re mainly focused on embedded and native Gen AI solutions. We see domain-specialized solutions coming to SG&A functions later this year, and going into next year we’ll see more and more of them become available for enterprises to use.
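As an illustration of the “native” pattern described above – one shared enterprise platform, with a separate thin solution built per use case – here is a hypothetical configuration sketch. The use cases, model names and data sources are invented for illustration; an enterprise would substitute its own platform (Azure OpenAI Service, watsonx, Bedrock) and its own governance settings.

```python
# Hypothetical sketch of the "native Gen AI" pattern: one enterprise platform,
# many use-case-specific solutions built on top of it.
# Model names, use cases and data sources below are illustrative only.
from dataclasses import dataclass


@dataclass
class GenAIUseCase:
    name: str                # business use case the solution serves
    function: str            # owning SG&A function
    model: str               # model deployment on the shared platform
    system_prompt: str       # use-case-specific instructions
    data_sources: list[str]  # enterprise data the solution is grounded on


# Each entry is a separate "solution" built on the same underlying platform.
USE_CASES = [
    GenAIUseCase(
        name="collections_email_drafts",
        function="Finance",
        model="gpt-4o-mini",  # illustrative deployment name
        system_prompt="Draft a polite past-due payment reminder.",
        data_sources=["ar_aging_report"],
    ),
    GenAIUseCase(
        name="policy_qna_assistant",
        function="HR",
        model="gpt-4o-mini",
        system_prompt="Answer employee questions using only the cited policy.",
        data_sources=["hr_policy_library"],
    ),
]


def build_solution(use_case: GenAIUseCase) -> None:
    """Placeholder for wiring a use case to the shared platform
    (deployment, prompt, retrieval over its data sources, access controls)."""
    print(f"Deploying '{use_case.name}' for {use_case.function} "
          f"on shared platform model {use_case.model}")


for uc in USE_CASES:
    build_solution(uc)
```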
Joe Nathan:
Got it. That clearly lays out the different types of AI solutions available, and since domain-specialized AI is harder to build, it may take time for the industry to mature and for more of those solutions to appear. So I think it’s time to pivot to our research. The research we published covered the full range of SG&A business processes that can benefit from Gen AI capabilities, and we went into a lot of depth. We evaluated the impact of Gen AI on SG&A processes within finance, HR, procurement and IT, and within each function we looked at end-to-end processes such as customer-to-cash, hire-to-retire and so on.
And within these big process areas, we also went down to the subcomponents – credit management, credit policy, hiring, talent management, all of that. I know we at Hackett spent quite a lot of time developing this research, and [inaudible 00:14:09] to that, we also developed a heatmap that describes the viability of AI solutions across SG&A functions. Can you walk us through our research methodology and how we developed these heatmaps, and how does the heatmap apply to our benchmark data?
Vin Kumar:
Sure. The approach we took, Joe, was to come from our strengths: we at Hackett understand the SG&A functions. We have a really in-depth taxonomy – we can go all the way down to Level 5 activities – and we understand what those activities are. So we broke SG&A into seven functions: finance, HR, IT, procurement, marketing, sales and services. We broke those seven functions into 17 end-to-end processes, such as hire-to-retire, account-to-report and plan-to-resolve. Then we went one level further down to 75 Level 1 processes, and a further level down to 305 Level 2 processes across the seven functions. At those 305 Level 2 processes, we applied Hackett’s AI automation framework to understand which activities are most suitable for which type of automation and how Gen AI is going to enable them.
That’s what we refer to as – you rightly called it – the heatmap. Then we took the heatmap and applied it to our benchmark. As you know, at Hackett we have metrics down at the Level 2 process level – productivity, level of effort required, cost of service – so we applied the heatmap to those metrics and determined what the impact is going to be, both on the cost side and on the productivity and FTE side. We also looked at the talent impact of enabling these processes with Gen AI: what new or additional skills will the team need? We conducted another study to understand that and incorporated it into our research.
Looking at all of that, we were able to project the impact on an organization. We modeled a company with $10 billion in revenue performing at the median of our database – we have low performers and Digital World Class® performers, but we used the median performance level – applied all the research and findings from our Gen AI effort, and aggregated it up to an overall SG&A level. We were able to show 40% FTE savings for that organization from adopting Gen AI across the SG&A functions. We also recognized that this is the overall impact, so we put it along a timeline, understanding that the savings will come through the embedded, native and domain-specialized solutions.
We spoke with multiple solution and service providers and understood their road maps – what solutions and Gen AI capabilities they’re going to deliver. We overlaid that on our analysis to see what the glide path looks like: when the improvement and the impact on SG&A will be felt, and by which function. When we looked at that glide path, we realized the total savings of 40% will be achieved over the next five to seven years, starting from this year. So we feel confident that by the end of this decade, SG&A will have seen at least 40% savings from an effort perspective. That’s the methodology we followed and how we calculated this. We’ve been speaking with multiple clients and research organizations in the industry to validate it, and we feel very comfortable saying this is what we’re going to see in the marketplace over the next five to seven years.
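For readers who want to see the arithmetic behind this kind of roll-up, here is a simplified, hypothetical sketch of the calculation pattern described above: each Level 2 process carries a baseline FTE count from the benchmark and an automation-suitability factor from the heatmap, the savings are aggregated up to the SG&A level, and the total is then spread across a multi-year glide path. The process names, FTE figures, suitability factors and adoption curve are invented for illustration and are not the study’s actual data.

```python
# Simplified, hypothetical sketch of the heatmap-to-benchmark roll-up described above.
# All figures below are invented for illustration; they are not Hackett benchmark data.

# (function, Level 2 process, baseline FTEs at median performance, heatmap savings factor)
LEVEL2_PROCESSES = [
    ("Finance", "credit management",      40, 0.45),
    ("Finance", "customer billing",       60, 0.50),
    ("HR",      "talent acquisition",     35, 0.35),
    ("HR",      "payroll administration", 25, 0.40),
    ("IT",      "end-user support",       80, 0.30),
]

# Illustrative share of the total opportunity realized each year (the glide path).
ADOPTION_CURVE = [0.05, 0.15, 0.30, 0.50, 0.70, 0.85, 1.00]  # years 1-7


def roll_up(processes):
    """Aggregate Level 2 savings to an overall SG&A savings percentage."""
    baseline = sum(fte for _, _, fte, _ in processes)
    saved = sum(fte * factor for _, _, fte, factor in processes)
    return baseline, saved, saved / baseline


def glide_path(total_saved_fte, curve):
    """Spread the total FTE savings across the adoption curve."""
    return [round(total_saved_fte * share, 1) for share in curve]


baseline, saved, pct = roll_up(LEVEL2_PROCESSES)
print(f"Baseline FTEs: {baseline}, projected savings: {saved:.1f} FTEs ({pct:.0%})")
print("Cumulative FTE savings by year:", glide_path(saved, ADOPTION_CURVE))
```

With these made-up inputs the roll-up lands at roughly 39% of baseline effort, which is the same order of magnitude as the study’s 40% figure; the real research works from Hackett’s benchmark metrics and its 305-process heatmap rather than a five-row table.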
Joe Nathan:
Got it. You walked us through a lot of the detail behind how we came up with the five- to seven-year road map. One thing I’m curious about: typically, when we work with companies and develop a two- to three-year road map, that road map carries a lot of caveats and dependencies. Companies have foresight into what they can do this year or over the next 18 months, but anything beyond 18 months brings a lot of challenges and dependencies. So while I understand how we built the five- to seven-year timeline and what went into it, what assumptions did we make in building it?
Vin Kumar:
Yeah, we made a couple of assumptions. One was about the capability and availability of these domain-specialized solutions. We asked: when do we think we’ll see a marketplace solution to help with regulatory filings, talent acquisition, legal, customer service or marketing lead generation? We looked at the entire landscape of solutions and spoke with multiple solution providers in the space to understand their expectations and also the challenges they’re facing. As I said, one of the key requirements for building these domain-specialized Gen AI solutions is access to data to train them. So we also looked at the data acquisition strategy these companies have: Where are they going to get the data? Is it client-specific data, publicly available data, synthetic data?
We factored that in and overlaid it on what we expected. Obviously, as with any prediction, new technology could change things. Current Gen AI solutions are based on large language models, and there are solutions and technologies emerging around what are called small language models. Those may change or accelerate the timeline – it could even turn out to be three years – but we’re confident in the five to seven years. What we wanted to tell clients is to set the expectation: don’t put in a Gen AI solution today and expect 40% savings to show up today. It’s not going to happen. It’s a journey you’re going to embark on that will fundamentally change the way you manage your processes and your function, with the objective of making them more and more autonomous. That’s the end goal you, as an SG&A functional leader, are trying to achieve.
Gen AI is going to enable that, and it’s a journey you have to go through. If you’re a more mature organization, you may achieve some of this faster; if you have a lot of technical debt to overcome first, it will take longer. It will vary. But the takeaway is that it’s a journey. It’s not a one-year journey. It may not take you a full seven years, but it could be anywhere in that five- to seven-year range. Some people expect it to be a three-year journey, and for your particular situation it might be, but expect a journey rather than “I put in a solution and I get 40% savings immediately.”
Joe Nathan:
The study is very encouraging and very detailed. The savings aren’t immediate, but there’s a lot of promise in getting to them. So what are some of the immediate next steps companies should take, especially as they start their AI journey and want to get ahead of the curve? What should they be focused on?
Vin Kumar:
I think there are five things companies should be doing. First, educate and develop policy. It’s important to demystify what Gen AI is and what it’s not, and we all have to come to a shared understanding of how we should look at it and address it. That education can’t stop at the executive level; it has to go down through the organization, because everyone has to understand what this is so they can determine where and how to use it. We also have to develop policy: How do we use Gen AI? How should we not use it? What is the impact of Gen AI on our supplier contracts, our customer contracts and our employees? So there’s a whole focus on the policy side. That’s the first step.
The second is mobilization through deliberate experimentation. Get some pilot licenses for your teams so they can experiment and really touch and feel what Gen AI is capable of. The third step is to start building some POCs: build a solution, use it, and understand its efficacy, what it costs to run and what value it delivers. These POCs give you the variables to build your long-term road map. The fourth action is to talk to the solution providers and service providers you already use and understand their road maps, so you don’t spend effort solving for something that’s already coming.
If a provider is going to deliver a solution two years from now and you want to do something in the interim, you can – but you need that visibility, because all of your solution and service providers are building Gen AI capability into the services they deliver to you. And the last one, which is the outcome of all of this: you’ve got to come up with your own AI road map for your function. What does that include? Which new solutions you want to use, where you want to use them, how much you want to invest, what your expected outcome or return on investment is, and who in your function is going to do this.
That road map is what we want to have built out by the end of the year, and we have to do the first four steps to inform it, so that going into calendar year 2025 we’re investing in and executing that road map. One thing I want to ask you, Joe – I know you work with a lot of clients on transformation and capability building: What should the CIO’s office do? What do you expect them to be doing, and how are they going to enable the functions to think about this? Anything you can share with us?
Joe Nathan:
Yeah, absolutely, because AI is not just a one-and-done technology project. Companies want to sustain their AI investment and use AI responsibly – you certainly don’t want to be in the media for using AI the wrong way or falling into the biases AI can sometimes introduce, because everything is still experimental right now. With a lot of AI products there’s going to be a human element: making sure the results tie to your values and are used appropriately. So there’s a big focus on operating model aspects as well. And the CIO organization is typically at the center of it – they see a lot of business teams using AI – so there’s going to be an element of establishing common standards across the organization.
Let me give some examples of the areas we look at. You mentioned policy and governance – that’s a very critical aspect, especially as you expand your AI use cases. Once you start expanding, many teams are using AI and oversight diminishes. So you want to establish the right policies: What is safe usage of AI? What data can AI leverage? Some AI tools expose your data externally, so what policies do you have around that? All of this is very important as you define your policies and governance for the right use of AI – to protect yourself, your employees and your customers. That’s going to be an important part.
The second aspect is operational readiness. That’s something many of our clients either underestimate or don’t think about at all. When you bring a new AI solution into your landscape, it’s easy to bring in because many of these are SaaS solutions – you can just deploy it and start using it. But what about all the data it’s creating? How do you support it going forward? These tools and their underlying models evolve constantly, so what you built a solution for can change drastically. How do you make sure support is provided? You have to make sure readiness is managed from an operational standpoint, and as the number of AI solutions grows, that support has to scale up with your growth and with the types of AI solutions you’re bringing in. So operational readiness and support is another big factor. The third one I would call out is overall AI strategy and standards.
There’s always a goal of bringing in best-in-class or best-of-breed AI solutions, but there’s also the challenge of whether they all align with your enterprise standards. So you need to establish standards on the types of AI you’ll use – you mentioned the types of solutions: embedded, native and domain-specialized. You want to decide where you want to grow your expertise, which areas to focus on and what types of solutions you’re trying to bring in, and make sure the strategy is established with good governance around it. Those are the three aspects I’d call out as ones you should not miss, especially as you grow. If you’re in experimentation mode, it’s OK to be just getting started on these, but as you grow, mature and scale, you want to make sure these key aspects are in place, and the CIO organization is right in the middle of bringing it all together.
Vin Kumar:
That’s a fantastic point, Joe, because we saw this with the other kinds of automation that came before. Seven years ago, when RPA arrived, the perception was: give an RPA tool to everybody and they’ll just go and automate. We quickly realized that was the wrong strategy. Creating this kind of center of excellence and operating model is critical. It’s easy to do POCs, but if you really want to scale, you need the discipline of that center of excellence and operating model. And in this case, especially with Gen AI, I really think CIOs own this and have to be out in front of it – thinking about how they’re going to establish policy and governance, because there is a lot of regulatory requirement coming in this space, and it applies differently to different use cases.
It’s not simply a matter of “you can use it” or “you cannot use it” – it varies by use case. So CIOs have to codify this and bring that discipline into these solutions, and creating this operating model within a center of excellence is going to be critical support that CIOs can provide. So, thanks for sharing that. This was great – a really exciting topic that our clients are constantly asking folks like Joe and me to talk them through, to demystify and to help them on this journey.
Joe Nathan:
Yeah, thank you.
Announcer:
Thanks for listening. You can find audio, a transcript of each episode and other resources at podcast.thehackettgroup.com. If you like this episode, please share it. You can also subscribe on your favorite listening app, so you never miss an episode. We welcome your feedback by rating this or any episode, or send us an email at podcast@thehackettgroup.com. The Hackett Group is a leading benchmarking, research advisory and strategic consultancy that enables organizations to achieve Digital World Class® performance. Learn how we can assist with your transformation journey at www.thehackettgroup.com.