For years, banks have used artificial intelligence for incremental gains, but a fundamental shift is now underway. With the rise of agentic AI, AI technology is moving beyond simple assistance to become an autonomous actor capable of driving business outcomes. For traditional banks, this represents a critical opportunity—not just to play catch-up in the AI race, but to take the lead in shaping the future of financial services.
Many banks excel at experimenting with AI, but face challenges in operationalizing it. The problem isn't the technology; it's the gap between IT and the business. According to Gaby Martin, pilots often fail because business teams aren't consistently involved from start to finish.
To achieve real impact, AI initiatives need dedicated business champions and top-down executive commitment. "Adoption doesn't just happen because you deploy the solution," Martin notes. "You need internal advocates to encourage the team." This means formally assigning business leaders to AI enablement roles, ensuring that organizational change management is at the heart of your AI strategy.
The most immediate and valuable application of agentic AI is in solving the data problem. Most banks have critical data locked in separate, disconnected systems. An "ask your data" solution, or an enterprise neural mesh, creates an intelligent layer on top of this data.
“Every organization will need some form of an enterprise data mesh, with agentic AI layered on top, that can answer any business question by discovering where the relevant data resides,” says Martin. This unified data layer serves as the foundation for everything else, providing a single source of truth that powers both simple queries and complex, automated workflows. It requires strong data governance and security to manage access and ensure compliance.
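As a rough illustration of the pattern Martin describes, a minimal "ask your data" routing layer might look like the sketch below. All source names, keyword rules, and function names are invented for illustration; a production mesh would route with an LLM against a governed data catalog, not keyword matching.

```python
import re

# Illustrative source registry: which systems hold which kinds of data.
# Names and keyword rules are invented for this sketch.
SOURCES = {
    "transactions": {"payment", "transfer", "balance"},
    "crm": {"customer", "churn", "complaint"},
    "warehouse": {"report", "quarterly", "revenue"},
}

def route_question(question: str) -> list[str]:
    """Decide which source systems likely hold data for this question."""
    words = set(re.findall(r"[a-z]+", question.lower()))
    hits = [name for name, keywords in SOURCES.items() if words & keywords]
    return hits or ["warehouse"]  # fall back to the central store

def answer(question: str) -> str:
    """Fetch a fragment from each relevant source, then synthesize."""
    fragments = [f"[{src}] lookup for: {question}"
                 for src in route_question(question)]
    # A real mesh would hand these fragments to an LLM to synthesize one
    # natural-language answer, or pass them to a downstream agent.
    return " | ".join(fragments)
```

In practice the routing step would itself be an agent with access to a data catalog, and the governance and access checks Martin mentions would sit between routing and retrieval.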
Traditionally, AI in the contact center has been about deflecting calls and reducing handling times to cut costs. Agentic AI flips the script to focus on revenue generation. By creating a unified profile of each customer—stitching together call history, chat logs, and transaction data—the system can predict needs and influence outcomes in real time.
“Before we even pick up the phone, we should know their propensity to churn or why they're calling,” Martin explains. This enables hyper-personalized service, such as providing a next-best offer during a live call or making proactive outbound calls to prevent churn or capture a cross-sell opportunity. With the right ethical guardrails and human oversight, the contact center can become a powerful, real-time revenue engine.
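To make the "know before you pick up the phone" idea concrete, here is a hedged sketch of a propensity score driving a next-best action. The signal names, weights, and thresholds are invented for illustration; a real system would use a trained model over the unified customer profile.

```python
# Invented weights for churn signals; a real system would learn these.
CHURN_WEIGHTS = {
    "missed_payment": 0.3,
    "complaint_call": 0.4,
    "balance_drop": 0.2,
}

def churn_propensity(signals: dict[str, bool]) -> float:
    """Score 0..1 from which churn signals are present for this customer."""
    score = sum(w for name, w in CHURN_WEIGHTS.items() if signals.get(name))
    return min(score, 1.0)

def next_best_action(signals: dict[str, bool]) -> str:
    """Map the propensity score to an action before the call connects."""
    p = churn_propensity(signals)
    if p >= 0.5:
        return "route_to_retention_specialist"
    if p >= 0.2:
        return "offer_fee_waiver"
    return "standard_service"
```

The point of the sketch is the timing: the score is computed from stitched-together history before the agent answers, so the next-best action is already on screen during the live call.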
Legacy systems are a primary obstacle to innovation. They’re often poorly documented, and the talent needed to maintain them is scarce. Agentic AI offers a powerful solution for accelerating the software development life cycle (SDLC).
AI tools can now analyze and document millions of lines of legacy code in days, a task that once took months or years. This documentation then provides the context for AI agents to assist throughout the entire modernization process—from generating new UI mock-ups to writing production code. The shift from a "human in the loop" to an "agent in the loop," where AI autonomously drives development, dramatically shortens timelines for launching new products and features.
Agentic AI is no longer a future concept; it’s an operational reality. It offers a clear path to compressing decision-making, unlocking new revenue, and modernizing core infrastructure. As AI’s role expands from assistance to ownership, the key challenge for leaders will be one of governance, trust, and control.
- Chapter 1: Why do most AI initiatives fail to deliver ROI?
Frederic Miskawi:
Welcome to CGI's From Transactions to Trust Podcast. I'm your host, Fred Miskawi. And there's been a quiet shift happening inside banking right now. Not another digital transformation or another AI pilot, and not even another innovation lab experiment, and we've seen a few. Something bigger.
For the last decade, banks have invested billions in technology, yet most of it has made them faster, not fundamentally different. AI has largely been treated as a tool, a cost reducer, a chatbot, a proof of concept. But what happens when AI stops being a tool and starts becoming an actor? An autonomous decision maker, a digital workforce, an agent that doesn't just assist humans but takes ownership of outcomes. That's the idea behind the Agentic Bank.
And here's the uncomfortable question. If fintechs are born digital, if big tech owns the customer interface, if legacy systems are still holding most banks back, then how does a traditional bank not just survive, but lead? Today I'm joined by Gaby Martin, CGI expert on agentic AI in our US AI national team. We're going to talk about why so many AI initiatives fail, what actually delivers ROI, why the contact center might become a revenue engine, and why this isn't a technology shift. It's a leadership test. Because the real question isn't, "can banks adopt agentic AI?" It's, "will they have the courage to rethink what a bank actually is?" This is the conversation about moving from ROI to market leadership. Let's begin. Hi Gaby. I'm going to head over to you for an introduction to our audience.
Gaby Martin:
Sounds good. Thanks for having me today. I'm happy to be here. As Fred mentioned, my name is Gaby. I am a part of our US National AI strategy team. I started my career very hands-on—building AI solutions. I then moved to leading a team of AI engineers and developers to deliver those solutions for our clients, both across banking and many commercial industries, as well as the public sector. And now I spend most of my time advising clients on what is the right AI solution to implement. How are they going to calculate ROI from that? What does adoption need to look like across the enterprise to really transform their business? So excited to be here today and talk about this.
Frederic Miskawi:
Thanks, Gaby. And by the way, don't tell the other experts, but you're my favorite. I love it. It's always a pleasure to work with you. And I know you've been busy lately, and we've been talking about this, and we've been pulled into a thousand different directions. What's been a trend for you in terms of the nature and the type of support that you provide to clients?
Gaby Martin:
Yeah, I think something I'm seeing a lot right now is that people are experimenting. People are doing AI. We're shifting into productionalizing AI. And that is where we're seeing a lot of hurdles that clients are starting to overcome. They thought about the governance, they thought about the process, they set up experimentation. But how do you truly productionalize large-scale AI solutions that are going to be used by thousands to millions of end customers, or internally by the thousands of employees that you have? The architecture changes, the adoption methods change, even from a legal perspective, you need to think about what you're doing. And so I think a lot of organizations have done a good job of setting up their enterprise to experiment with AI, but they're really struggling right now to productionalize really large-scale solutions.
Frederic Miskawi:
So, take that large-scale digital transformation that you're referring to, and each of the aspects of the guardrails that we have to put in place. And how do we rectify that with all the pilots that have been failing? And both you and I have been involved in helping rectify some of these. So, why have so many pilots failed? And how do we ensure that we're getting real impact and real business benefit from the efforts going into pilots?
Gaby Martin:
Yeah, I can think of two things. The first, and I've been passionate about this for years, even when we were building machine learning models and not agentic AI, is there needs to be champions in the business that are present from the start.
I think a lot of times, you know, we come up with an idea, even the business comes up with the idea, but then they hand it over to the development team, and they're the ones that are going and trying to figure out what data is needed, how they're gonna build this system. And then at the end of the day, when it gets deployed, or it's in a pilot, it fails because the business comes back and says, “hey, it's not giving the answer I want, it's wrong.” And they're upset because they're now wasting more time using this tool than if they were doing it themselves because they're not getting to the right answer. And sometimes that comes from them not being involved in the process, but it also comes from them not being trained properly on how to prompt the AI to get to the answer that they need.
So, I'm very, very passionate about needing to have the business along every step of the way of that development and then identifying champions when you go to pilot that are going to help drive adoption. Because adoption doesn't just happen because you deploy the solution. It happens because you're talking to someone on your team and you're talking about a task that you got assigned, and they said, “oh, I finished that in a minute with AI versus spending two days to complete it.” So you need those internal advocates to really encourage the rest of the team to adopt the solution.
Frederic Miskawi:
So internal advocates, the ability to cater to and support the individuals going through the process. Do you feel there's a lack of business commitment, perhaps?
Gaby Martin:
I think everyone has a lot of things going on right now. So yeah, to put it nicely, yes, but I think everyone's being expected to do more with less. And a lot of times they're not viewing the business being involved in this project as something that's fundamental. It's just when they can get some of their time, and that's not going to make this successful.
Frederic Miskawi:
Yeah, and part of what I've seen for success of these types of pilots is a drive from top down, as well as a drive from bottom up, and concrete examples of how this technology can help on a day-to-day perspective. You go through training, and then if you don't apply and you don't know how to apply, you forget. And the training was for nothing. And if you're not provided clear value and sustained kind of support when you bring in these experiments, they are becoming just that: experiments. And they don't scale.
Gaby Martin:
Yep. And to that point, people are going to follow what their leadership does. So it needs to come from the top down, but also leadership needs to be willing to invest and say, okay, you know, your new role is to help enable AI in this area, and you're gonna be our representative from the business side. And that means that your other responsibilities need to be taken away and someone else needs to be able to focus on that so that you can truly dedicate your time to, you know, focusing on those AI solutions that need to be productionalized.
- Chapter 2: What is the first quick win for agentic AI?
Frederic Miskawi:
Yeah, to me, that goes back to the basics of organizational change management and psychology. That's a lot of what's at the root of these failures, at least from what I'm seeing: either misaligned expectations, or a failure to take into account the psychology involved with the people that are being impacted by this technology. What's the first quick win for agentic AI that delivers clear ROI from your perspective?
Gaby Martin:
Yeah, so I think something we're seeing a lot across every single industry right now is an ask-your-data solution, or a natural language search against all of the data. So think about any enterprise, right? They have data in five, six different core systems, whether that's their ERP or their data warehouse that they have or whatever it may be. A lot of times, they haven't centralized all that data in one place. And so they're going to different teams to get information to make decisions. And when you start building end-to-end workflows from an agentic perspective that need to go into each of these different solutions and get the data to then go make a decision, you need a layer that can facilitate all of that, which is what we call ask your data.
So, whether you're building it as a standalone chatbot that is connected to all these different source systems, or you're building it as microservices and APIs that you can then plug into other agentic solutions down the line, every business—and I'm again very certain on this, I will say this as a fact—every business will need some sort of enterprise mesh with agentic AI on top of it that can go in for any business question and figure out where that data lives. Go get the data, bring it back and synthesize it and mesh it together with other pieces to actually give that natural language back to another agent to then go take action, or give that natural language answer back to a customer.
Frederic Miskawi:
Yeah, and that goes back to the concept that I call the enterprise neural mesh, that layer on top of the data, the data connections, the connectors, the data pipelines to be able to bring it all together and provide that insight, build a digital twin, and then be able to act on that, oftentimes automatically. So if everyone has access to that kind of insight, does that really improve decision quality? Like, how do you get to balance the chaos involved in risk mitigation and the proper decision-making?
Gaby Martin:
Yeah, absolutely. So, there are definitely controls that need to go in place for that, right? When you think of security that's needed and who can actually access what data, and the pass-through that needs to happen between different systems, to say, "oh no, wait, that's private information, that's IP, we can't actually go get that back and return it to you." So, that's one step of the puzzle.
I think the other step is the guardrails that need to be implemented into the solutions themselves. So, is harmful content being sent across? Are we making sure that we're protecting it from prompt injection and things like that? So, all that comes about when you're starting to release these agents that are going in autonomously, right? Finding the data that they need. And this goes back to the age-old point: your data foundations must be in place, your data governance must be in place, and your data security must be in place. AI is not going to necessarily do that for you. I know we're coming out with some things in AI that can help with that, but that foundation needs to be really set before you put a solution like agentic AI on top of it.
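The two checks Martin names here, access control and prompt-injection screening, could be sketched as pre-flight guardrails an agent runs before touching a data source. The roles, patterns, and function names below are illustrative only; real systems would use proper policy engines and dedicated injection classifiers.

```python
import re

# Deny-by-default access map and a few injection phrasings, both invented
# for this sketch.
ALLOWED = {
    "support_agent": {"crm", "transactions"},
    "analyst": {"warehouse"},
}
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal .*system prompt",
]

def can_access(role: str, source: str) -> bool:
    """A role may only touch sources it has been explicitly granted."""
    return source in ALLOWED.get(role, set())

def looks_like_injection(text: str) -> bool:
    """Cheap screen for common prompt-injection phrasing."""
    lowered = text.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)

def guarded_fetch(role: str, source: str, query: str) -> str:
    """Run both checks before handing off to the real data connector."""
    if not can_access(role, source):
        return "DENIED: role lacks access to this source"
    if looks_like_injection(query):
        return "BLOCKED: query flagged as possible prompt injection"
    return f"fetch({source!r}, {query!r})"  # hand off to the real connector
```

The design choice worth noting is ordering: access control runs first and is deny-by-default, so a new source added to the mesh is invisible to every role until someone grants it.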
- Chapter 3: How can AI transform the contact center from a cost center into a profit driver?
Frederic Miskawi:
Now, when we talk about this strategic topic from cost center to profit driver and proving the value that we're getting from this technology, one aspect of it is also contact centers, and especially true in banking. So, how can AI in the contact center move from cost saving to revenue generation?
Gaby Martin:
When I think of cost saving in a contact center, I'm thinking of trying to deflect calls. So, can we handle that autonomously before it even needs to get to an agent? Can we reduce the handle time of the agent talking to the customer and therefore serve more customers? And overall, we're trying to cut costs. So, when you say cost saver, those are the things that we continue to do, and they work well.
But when we think of the world of agentic AI and generating revenue, I think we can take it a step further and try to influence outcomes on those calls as well. And that starts with detecting the sentiments and detecting the topics and having a history of all of that so that we can then predict in real time, "hey, we've seen this sentiment before, we've seen this topic before." This is the next step that you need to take with this customer. So, giving a next-best offer or action live during that call with the agent and customer.
We've also seen real-time translation and transcription. So now being able to actually talk bi-directionally when two people speak different languages, right? And hopefully, increase that customer satisfaction. And this all comes back to that hyperpersonalization that I think we're seeing in the world of agentic AI. How can we stitch together all those different pieces that they emailed us, they reached out via chat, now they're calling over phone, we saw that they missed a payment, or maybe they flagged something as fraudulent. Before we even pick up the phone with the customer, we should know maybe the propensity to churn or why they're calling in, so that we can be better prepared to help them.
And lastly, to even take that one step further, I've been talking with a lot of organizations about outbound calling. So before even waiting for the customer to call you, you proactively identify that you need to try to cross-sell to them or you think they're gonna churn, and you need to reach out before that problem even occurs, or jumpstart that opportunity.
Frederic Miskawi:
So, I think what you've just done, Gaby, is lay a path from cost center to a true real-time revenue engine. And you've talked about hyperpersonalization, which is incredibly important, regardless of the industry. We're seeing customizations, hyperpersonalization across the board. But at what point does that cross the line into manipulation and where's the ethical boundary associated with it?
Gaby Martin:
Yeah, that's a good question. I would say today, everything that we do is being tracked. And if you think of it from a banking perspective, your bank has all of your transaction history, right? And where you're spending your money. Your banks also know if you contacted them and what was talked about on that call, because they have the transcription, or at least they can get that transcription. So those are all things, as consumers, that we are trusting those banks to secure our information and to do the right thing with them.
So I think it starts with, you know, understanding that if you're going to sign up for a service with these organizations, they have access to all this data. But there may be sign-offs that you need attestations, right, at the bank to say, just so you know, this information is going to be used to help make recommendations and to personalize your experience. So it's kind of that opt-in that we've always seen. But at what point are we going to say that that's actually, you know, regulated and that banks have to do that, or other industries have to do that? I don't think we're there yet. And I think today there's a bit of a free-for-all of what's happening with everyone's data.
Frederic Miskawi:
It does feel like it. There are definitions, like when you talk about credit collections, for example, where the preference for communication is something that needs to be respected. So, with this type of ecosystem of agents, how do you make sure that these preferences are honored, that these agents don't make a mistake and call when they're not supposed to?
Gaby Martin:
I like to think of that as hard rules and not to go back to rule-based systems, but you have agents that are making decisions on their own, but you need a check in there. As the agent is going to go search through that data set or go reach out to that customer, there has to be a rule in place that says, “okay, now at this step, when the agent has made it here, this is the next thing that needs to happen. This is the next thing the agent must check. And if the agent doesn't go check that, we're not going to be able to take the next step.”
So, there does need to be some sort of step in there that the agent is forced to take, whether you are hard-coding that or not; that can be a decision that you look at. I also think human-in-the-loop is so important, especially for banks that are not as comfortable in this space, and they don't necessarily trust what the AI can do. You start with a human-in-the-loop to review and approve those actions.
So essentially, the agent comes up with all the steps that need to be taken, but a human will review it and let it know if that's okay or not. That way, the human has the ultimate control before it goes and takes that action. And over time, as the human is reviewing and saying yes or no, the agent will learn and hopefully you'll see that accuracy go up to where you get to a point where it's 99.9% or 100% accurate, and you're ready to release that, you know, autonomously without a human reviewing.
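The review-then-graduate loop described here could be sketched as a gate that requires human approval for every proposed action until observed accuracy clears a threshold. The class name, the 99.9% bar, and the minimum review count are all illustrative assumptions, not a prescribed implementation.

```python
class ApprovalGate:
    """Require human sign-off until the agent has earned autonomy."""

    def __init__(self, autonomy_threshold: float = 0.999,
                 min_reviews: int = 1000):
        self.approved = 0          # actions a human confirmed were right
        self.total = 0             # actions reviewed so far
        self.autonomy_threshold = autonomy_threshold
        self.min_reviews = min_reviews

    @property
    def accuracy(self) -> float:
        return self.approved / self.total if self.total else 0.0

    def autonomous(self) -> bool:
        """Release only after enough reviews at sufficiently high accuracy."""
        return (self.total >= self.min_reviews
                and self.accuracy >= self.autonomy_threshold)

    def submit(self, action: str, human_ok: bool) -> str:
        """Route an agent-proposed action through (or past) the reviewer."""
        if self.autonomous():
            return f"EXECUTED without review: {action}"
        self.total += 1
        if human_ok:
            self.approved += 1
            return f"EXECUTED after approval: {action}"
        return f"REJECTED by reviewer: {action}"
```

A real deployment would track accuracy per action type rather than globally, so the agent can graduate on low-risk actions while staying gated on high-severity ones, which is exactly the severity-driven distinction discussed next.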
Frederic Miskawi:
And Gaby, human-in-the-loop has always been very important to you and I. And we talk about that to clients quite a bit. In this case, though, every time you've got human in the loop, there's a certain amount of costs associated with that. So, at what point do we switch to AI in the loop? And how do you balance between human in the loop and AI in the loop?
Gaby Martin:
I would say, what's the cost of showing up in the news for your AI making a decision you didn't want it to make? That's kind of the place I always go first, right? You should assess whether humans should be in the loop based on how critical the decision is that the agent or the AI is making.
So, a very simple example I always used to give is if you were to predict the weather and you predict the weather wrong to a consumer, are they going to sue you for that? Are they going to get upset about that? Probably not. They might go outside and get rained on when they didn't expect to get rained on. If you are in the health space and you predict that a patient has some sort of diagnosis and they don't, that's very much going to affect their mental health. And on the opposite side, if you predict that they don't have a diagnosis but they do, they could be losing a life. So the severity of what is being predicted and acted on should always be what drives the decision for human in the loop or not.
- Chapter 4: How can banks overcome legacy systems with agentic AI?
Frederic Miskawi:
So, we were talking about contact centers and the cost associated, human-in-the-loop. When I look at the vast amount of legacy systems that are powering these contact centers and back-end systems for the banks, we see an environment that is ripe for modernization, for transformation. So, from your perspective, Gaby, how can agentic AI help banks overcome those legacy systems and enable them to launch products faster?
Gaby Martin:
The first thing that comes to my head is software development life cycle acceleration. We've worked with so many clients that are sitting on legacy systems where these legacy systems are not even documented. And so they're struggling to say, "how are we even going to be able to modernize this? Oh, and by the way, the talent that we have to manage this legacy system, we're not hiring those people anymore. They're not even teaching that coding language anymore. So, how are we going to be able to maintain the system in production much longer?" And the power of agentic AI in software development, I would say, is beyond any other area right now where we're seeing the most efficiency gains.
I think we documented over 27 million lines of code, all with agentic AI in less than a week span, using the tools that we have available today with GitHub Copilot to be able to do that. And it was astounding that we then took that documentation and we were able to help create the new code for the new system. So it goes back to these steps of: use it first to figure out what the current system is doing, just like we used to do our current state assessments. It can help with that. And then as you move forward, you use that documentation that has been created to then, you know, drive the new development of the new system.
And it's not just development. This is helping project teams, it's helping project managers, it's helping business analysts, it's helping the QAs when they're doing the development. So it's really the full end-to-end lifecycle that it takes to modernize a system. And one area where we're seeing it work really well is on mocking up a UI. I mean, that can happen in minutes nowadays with a really good prompt. And once you have that mocked-up UI, you also then feed that to the agentic AI system and it's able to generate all the code that goes behind that front-end design. So all those pieces can be stitched together to really shorten that timeline to get to a whole new product or even develop a new feature in that product.
- Chapter 5: The path to market leadership
Frederic Miskawi:
Yeah, when we talk about legacy modernization, we're starting to see a shift in SDLC acceleration. One from human-in-the-loop, where the human is using that documentation and helping provide guidance through prompts to accelerate the development process, to one where the agent is in the loop and the documentation you were just mentioning is now used as context for the agents to understand what needs to be done and how to orchestrate, to be able to start doing this legacy migration in a more automated fashion.
So, in your experience, what have you seen in that space? Are you starting to see that shift to agent in the loop to documentation like this, technical documentation, documentation of legacy systems being used purely as context for agents?
Gaby Martin:
Yeah, absolutely. I actually was just at an alliance gathering the other day for one of the platforms that we use with many of our clients, and they actually just released a well-architected framework that contains all of their architecture best practices, all of the new features in their tools, all of that is documented nicely. And they have an agent inside of their platform that is connected to all of that well-architected framework. So anytime you need it to go build something, it's actually using that context and retrieving that to go build it, so that you're not having to consistently pass it all of that information. So, I would agree we're seeing people do it custom, and we're also seeing platforms starting to embed that directly within their systems.
Frederic Miskawi:
Spec-driven, agent-native software engineering. It's a mouthful, but that's what I'm seeing as well. And that's across industries, especially true over the last, let's say, three months. At the time of this recording in mid-February, a lot has happened over the past two months when it comes to software acceleration: Claude Code, the latest version of Codex.
Gemini, and Google is about to release a new version, which will be exceptional from a coding perspective. So, now we're starting to see these documentations that we're generating, the interviews that we do for legacy modernization; we're starting to see all of that as context, curated information that is being fed to agents, where the agents are the primary value drivers. And that's an amazing shift that we're seeing happen as we speak. So we're going to be cutting this podcast into two different parts, part one and part two. In this first part, we started this conversation talking about ROI. And what's clear is this: agentic AI isn't theoretical anymore, it's operational. It compresses decision time, turns contact centers into revenue engines, and it unlocks value hidden in data that banks have owned for years.
I think the opportunity is real, and so is the competition. Once AI starts influencing outcomes at scale, this stops being about efficiency. It's about ownership, it's about leadership, it's about accountability and control. And I think this is where the conversation gets more complex. So in part two, we're going to move beyond opportunity and into governance, into leadership, and what it really takes to build trust in an agentic bank. Because the real transformation isn't just technological, it's structural. So join us in part two. Thank you, everyone.