For years, banks have invested in AI to improve efficiency by accelerating processes, reducing costs and automating routine work. That focus is now evolving. As AI begins to influence decisions at scale, from approving transactions to shaping customer experiences and managing risk, the conversation is shifting. Today, success is no longer defined by efficiency alone. It is defined by how well organizations ensure accountability, strengthen governance and build trust in AI-driven decisions.

In this episode of From Transactions to Trust, Frederic Miskawi and Gaby Martin explore what it takes to become an agentic bank. They examine how roles are evolving, where banks can unlock new value from data and how leaders can guide organizations where humans and AI make decisions together.

Driving accountable decisions at scale with AI

The first wave of AI delivered measurable efficiency gains. The next wave introduces autonomy, and with it, greater responsibility. “Once AI starts influencing outcomes at scale… that’s no longer about efficiency, it’s about accountability,” says Miskawi.

This shift raises critical questions:

  • Who owns an AI-driven decision?
  • Where does liability sit?
  • How can banks move quickly while maintaining trust?

For many institutions, these are not technical challenges; they are governance challenges. Addressing them effectively enables banks to scale AI adoption while maintaining regulatory confidence and protecting client trust.

Enabling better business outcomes through judgment-led leadership

As AI takes on execution, the value of human contribution is changing.

“We don’t need people who can perform tasks really quickly because the AI can do that for us,” Martin explains. “We need judgment leaders.”

These leaders define problems, guide decisions and ensure ethical oversight of AI systems. Their impact is measured not by tasks completed, but by outcomes delivered, such as revenue growth, improved client experiences and reduced risk exposure. This shift helps organizations focus on what matters most: business impact.

Shifting mindsets to unlock measurable business impact

While AI capabilities are advancing quickly, many organizations are still catching up in terms of mindset. Professionals often describe their work in terms of tasks rather than outcomes. A developer may focus on building a feature, rather than recognizing how it improves client retention or drives revenue. Martin challenges this thinking directly: “That’s not what you did… you’re actually helping the organization generate revenue… or reduce churn.”

By reframing work around outcomes, organizations can:

  • better align teams to business priorities
  • improve decision-making
  • accelerate value realization from AI investments
  • focus on the value that needs to be delivered for the client

Strengthening trust and governance in AI-driven environments

As AI systems scale, governance becomes more complex. Traditional approaches, such as manual review, are no longer sufficient. “When you have an agentic ecosystem that’s producing a hundred thousand lines of code in a day, humans are not able to review every line,” Miskawi notes.

The risk is that oversight can become superficial rather than effective. To address this, organizations are redefining the role of human oversight. Instead of reviewing outputs, teams focus on validating intent, logic and design. Governance becomes a shared responsibility, embedded across processes and teams.

“I don’t think it’s just one human in the loop review that’s going to be able to save us,” Martin says. “There’s more of a process… multiple people saying, I trust this and I put my name on it.”

This approach helps strengthen accountability, reduce risk and build confidence in AI-driven decisions.

Unlocking new value and insight from unstructured data

While governance and workforce transformation are top priorities, a significant emerging opportunity lies in what Martin calls banking's "biggest untapped gold mine": unstructured data.

Banks already hold vast amounts of information, including call transcripts, chat logs, emails, and transaction narratives. When activated, this data can deliver a more complete understanding of client needs.

Banks can move from reactive service to proactive engagement, anticipating needs, improving personalization and increasing cross-sell and retention opportunities. As Martin notes, frontline teams could understand a client’s needs “with 80 to 90% confidence” before the conversation begins.
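As a rough illustration of how such signals might be surfaced, the sketch below tallies client-need signals from a few hypothetical unstructured records (call notes, chat logs) using a toy keyword lexicon. Production systems would rely on NLP models rather than keyword matching; all record text, categories, and signal names here are invented for illustration.

```python
from collections import Counter

# Hypothetical snippets of unstructured data a bank already holds
# (call notes, chat logs, email summaries); text is illustrative only.
records = [
    "called about mortgage rates and moving to a bigger home",
    "chat: asked how to open a savings account for a new baby",
    "email: disputed a travel charge, books flights every month",
]

# Toy keyword -> need-signal lexicon (a stand-in for a real NLP model).
signals = {
    "mortgage": "home_purchase",
    "home": "home_purchase",
    "baby": "new_family",
    "savings": "saving_goal",
    "travel": "frequent_traveler",
    "flights": "frequent_traveler",
}

def profile(texts):
    """Tally need signals across a client's unstructured records."""
    tally = Counter()
    for text in texts:
        for word in text.lower().split():
            sig = signals.get(word.strip(".,:"))
            if sig:
                tally[sig] += 1
    return tally

# Strongest signal first; a frontline team could see this before a
# conversation even begins.
print(profile(records).most_common(1))  # → [('home_purchase', 2)]
```

Even this crude tally shows the idea: scattered interactions, taken together, point to a likely need before the client states it.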

Accelerating value through practical, scalable AI adoption

Becoming an agentic bank does not require a perfect starting point. Transaction data, which is structured and widely available, offers a practical entry point. Spending patterns can reveal intent, predict needs and trigger timely engagement.
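One way to sketch this transaction-data entry point: represent each client as a category-spend vector, compare clients with cosine similarity, and surface products that similar clients went on to adopt. Everything below (client names, spend figures, products, the 0.9 threshold) is a hypothetical illustration under simplified assumptions, not a method described in the episode.

```python
from math import sqrt

# Hypothetical monthly spend by category for three clients.
spend = {
    "client_a": {"groceries": 900, "travel": 1200, "childcare": 0},
    "client_b": {"groceries": 850, "travel": 1100, "childcare": 50},
    "client_c": {"groceries": 400, "travel": 0, "childcare": 1500},
}

# Products each peer adopted after showing their spending pattern.
adopted = {
    "client_b": ["travel_rewards_card"],
    "client_c": ["education_savings_plan"],
}

def cosine(u, v):
    """Cosine similarity between two category-spend vectors."""
    cats = set(u) | set(v)
    dot = sum(u.get(c, 0) * v.get(c, 0) for c in cats)
    nu = sqrt(sum(x * x for x in u.values()))
    nv = sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def suggest(target, threshold=0.9):
    """Suggest products adopted by clients with a similar spending pattern."""
    picks = []
    for peer, products in adopted.items():
        if peer != target and cosine(spend[target], spend[peer]) >= threshold:
            picks.extend(products)
    return picks

# client_a spends like client_b (heavy travel), so the travel product
# surfaces as a proactive cross-sell candidate.
print(suggest("client_a"))  # → ['travel_rewards_card']
```

The appeal of starting here is that the inputs are structured and already on hand; the same scaffold can later be enriched with unstructured signals as capabilities mature.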

From there, banks can scale into more advanced use cases, incorporating unstructured data and more sophisticated AI capabilities. The key is to start with clear, value-driven use cases and build momentum. This approach enables faster returns, reduces risk and supports sustainable transformation.

Leading long-term value creation in an AI-driven enterprise

Leadership plays a defining role in the success of agentic AI. AI investments often require upfront commitment, while the full value emerges over time. Focusing only on short-term cost savings can limit long-term impact. Martin illustrates this with a simple analogy: “If you think about electricity… was it cheaper than lighting a candle? It wasn’t. But look at what it enabled.”

AI delivers value in a similar way. It enables new business models, improved decision-making and greater organizational agility. Leaders who focus on long-term outcomes, rather than short-term efficiencies, position their organizations to compete and grow.

Redefining how humans and AI deliver competitive advantage

“Agentic AI isn’t just about efficiency… it’s about ownership, decision making, speed, trust,” Miskawi reflects. The organizations that experiment will improve. Those that act strategically will compete. But those that rethink how humans and AI work together to deliver value will lead.

Looking ahead, the defining question will not be who adopted AI, but who used it to drive meaningful outcomes and build lasting trust.

Chapter 1: Driving accountable decisions at scale with AI

Frederic Miskawi:

Welcome to the CGI From Transactions to Trust podcast, where we talk about the agentic bank. And this is part two. So, in part one, we talked about the opportunity, the ROI, the revenue potential, the competitive edge. But once AI starts influencing outcomes at scale, approving decisions, shaping customer experiences, prioritizing risks, I think the conversation changes. That's no longer about efficiency, it's about accountability. So, who owns the decision? Who carries the liability? How do you innovate boldly without eroding trust? In this second part of our conversation with Gaby, we're going to go a little deeper. We're going to go into governance, we're going to go into workforce transformation, we're going to go into what leadership actually looks like in an agentic bank. Because the real shift isn't deploying AI, it's redesigning how responsibly it works.

So, let's continue. Gaby, if AI can redesign these systems faster than our internal teams or our clients' internal teams or joint teams, what does that mean for the future of these types of roles? What does it mean for the future of the developer, the future of the quality engineer? What does that mean? How are we seeing these roles evolve?

Chapter 2: Enabling better business outcomes through judgment-led leadership

Gaby Martin:

Yeah, I think about this a lot. I think historically we've been very focused on tasks and measuring the performance of tasks, right? How many tickets did you work? How many hours did you work today? And I think we're going to start shifting in the future to we don't need people that can perform tasks really quickly because the AI can do that for us. We need judgment leaders. We need people that can really focus on framing problems correctly. They can go make strategic decisions around that. They can think creatively about new features that need to be developed. And they need to know how to supervise AI and make ethical judgments. So, I really think we're going to see that shift from task performers to more judgment leaders.

And for that to happen, to take it one step further, leadership from the top-down needs to redefine metrics. So, before, when you used to say, how many story points did you complete today? That's not going to drive your developers to be judgment leaders. You need to think about the business impact that they're now driving, the revenue influence, maybe the risk reduction. So, I think there's new ways we need to measure our employees as well.

Chapter 3: Shifting mindsets to unlock measurable business impact

Frederic Miskawi:

Yeah, the KPIs that we're focusing on, we're seeing that evolve in the dashboards that we're creating for the enterprise neural mesh and what we measure, how we report it. And a lot of what I've been doing is to ask people to focus on value. So, whether you're a developer or a quality engineer, focus on the value that needs to be delivered for the client. And the tools are just that: tools, techniques, methods, processes that enable us to get to that kind of outcome. And I think that shift is difficult for some people. Are you seeing that kind of psychological roadblock for some of our experts, developers and quality engineers to make that shift?

Gaby Martin:

Absolutely. If you go ask any developer, I would say go ask one right now. What did you do today? They will tell you, oh, I worked and coded this many lines and I built this new feature. And I've been coaching them to say, no, that's not what you did. What did you do? Okay, you built an AI recommendation system. What is that recommendation system going to do? Let's just say in banking, it's going to help us cross-sell, or maybe it's going to help us understand a customer that's about to churn. So, what you're doing is actually helping the organization generate revenue through cross-selling, or your actual task is that you are trying to reduce churn so that you guys can generate more revenue as well. And I'm trying to make them see the bigger picture, right? Again, not just tasks that they're doing, but what you just said, that value that they're driving with the solution that they're building. Because I really truly believe, you know, technology has proved it's capable. And we're only, what, almost three years out from when Gen AI got released.

Frederic Miskawi:

Feels like an eternity ago.

Gaby Martin:

Yeah. But imagine where we're going to be in five years or in 10 years. I have no doubt that technology is going to be able to perform every task. And so, people really need to be thinking about what drives a business and what's that value and what's their strategic mission.

Chapter 4: Strengthening trust and governance in AI-driven environments

Frederic Miskawi:

In a weird way, I often think about copilots in airplanes. Pilots can easily fly 737s by themselves now with all the technology involved in the plane, but we still require co-pilots in that cockpit. Why? Because of risk mitigation. Also, it's a lot more fun to have two people in the cockpit who can talk to each other versus one person who potentially could fall asleep. I think about that a lot. As these roles are changing, we're talking about developers starting to move away from the keyboard and letting agents do all the typing. And forcing a certain amount of human in the loop, even though, yes, this technology can handle it all by itself, gives us risk mitigation. It's an insurance policy, and it enables us as well to train the next generation of orchestrators and experts who can provide oversight on these ecosystems.

Gaby Martin:

Agreed. And I would say with that, it makes me think about what skill sets people need to be focusing on. And I think it's, to what you're saying, people who have very deep industry expertise in a specific area and keep learning about it. Yes, it's great to learn about technology, just like I'm sure it was great for pilots to still have that foundation and learn how to fly a plane. But at the end of the day, they need to have a specific expertise in an area where, if there is a risk, they're going to know how to handle it. Or, if a new idea needs to be created, they understand the industry that they're in and what's going to make sense there. Just like if a pilot has to act because a plane's going down, they're going to be able to do that, right? So, I think that's where the focus should be moving forward.

Chapter 5: Unlocking new value and insight from unstructured data

Frederic Miskawi:

That makes sense. In the general topic of competitiveness, the ability to keep up to date with the Joneses and to become an agentic bank, I would like to go back to one of your core strengths around data and the ability to mine gold out of that data. You're amazingly good at that. What's the biggest untapped gold mine in banking, more specifically, that you see? And what recommendations would you have?

Gaby Martin:

I would say unstructured data. I think over the past couple of years, people have gotten really good at getting insights out of structured data, whether that's reporting or those ask-your-data type solutions. But we're still sitting on a massive trove of unstructured data, whether that's transcriptions from call centers, chat logs, email threads, complaints and dispute narratives, or that real-time transactional behavior that we have. And it's not just looking at the fact that a transaction happened, but that transactions happened in a specific order, and looking at the types of transactions and the different companies customers are spending their money at. That takes more natural language processing to pull all of that out and really understand their spending pattern, maybe even life-stage transitions the customer is going through, or how we can cross-sell to them.

So, I would say this customer 360 view, even think about online when they go and click through your website. How are you actually making a story out of everything that's being clicked on to then understand that hyperpersonalization that we're trying to get to? And the easiest way I can make this make sense is when a customer ends up coming into your local branch at your bank to chat, and they didn't give any reason for what they wanted to talk about. Whoever's going to help them using all of this unstructured data that's been combined to create this customer 360 view, that person is going to have a really good idea, hopefully with 80 to 90% confidence of how they can help that person today and be able to help them much faster to increase that customer satisfaction and make them a long-term loyal customer to continue to generate revenue.

And I think as technology advances, so many new companies are going to start, right? People can do that pretty quickly, now. You need to have a strong focus on customer loyalty, customer buy-in. We get less and less patient by the day. People expect things at the snap of their fingers because of the technology that we have. And so, we need to be hyper-focused in the banking area on how we can serve our customers very quickly and know what they want before they even get there.

Chapter 6: Accelerating value through practical, scalable AI adoption

Frederic Miskawi:

Yeah. And that know your customer approach. We've seen many different programs in that space with financial services firms. What would be the best way to activate it at scale for a bank that may not have necessarily taken it on in the past and they're looking to either scale it up or taking it on for the first time? What would be the best way to scale that up?

Gaby Martin:

I would say the first place to start is looking at those customers' transactions. You have that data for years, and you have many customers' data for years. You can find those patterns really quickly, and then you can actually start comparing customers to one another, just like your typical recommendation systems. And you can say, okay, when this customer's spending started shifting into this area, they started looking to open up these new types of bank accounts, XYZ. And then you can say, this customer's spending in the same way. I'm going to go ahead and proactively reach out and cross-sell and see if this is something that they're interested in. So, since that's, you know, more semi-structured data, you just have to pull out some of that information from a pattern perspective, as well as maybe some of that natural language of where they're spending. That would be a really good first place to start.

Frederic Miskawi:

Yeah, that makes sense. And I mean, it's a heavily regulated space, right? But there's value to be able to walk up to the line of regulations and provide that kind of insight. And now the tools are there to enable you to bring that unstructured data together for predictive models and even from a Gen AI perspective, the ability to leverage that and tap into it.

For me, the interesting thing about untapped gold mines in the banking industry is when I go back to the basics. So, when you look at the hyperscalers and the models that they're training, they're training based on data that they have access to, which is broadly what's available on the internet. And as we've seen in multiple reports, they've reached a limit when it comes to the data accessible to them.

But I look through my years of experience in this space and other industries, and there's untapped potential for training data locked within the enterprise: things about the industry, about regulations, about the processes and procedures that are being followed, about some of the issues that may have happened in the past, even about systems themselves and how these systems are used, because potentially the same systems are used across the financial industry. To me, that is untapped potential for augmented models, for custom models, or even for the hyperscalers to potentially leverage that information for financial services-specific types of models. I think that's what I'm seeing.

So, Gaby, when AI agents operate autonomously, what's the number one governance concern? We get a lot of questions about this. We have orchestrated agents that are growing almost exponentially sometimes. So, what's the number one governance concern from your perspective?

Gaby Martin:

I hear a lot about liability and who's going to be held responsible for the decision that the agent is making. And I think it's a pretty simple answer. It's you, right? Yes, you're using a model provider's model, but you are the one that's developing on that model. You're passing it the context, you're the one that's adding the governance around it. You should be the one that's testing it to make sure that it's producing accurate results. And at the end of the day, you know, if you're not checking all of those things, it can lead to reputational damage, regulatory liability and financial losses. So, I always, and we talked a little bit about this earlier, recommend starting with that human in the loop, so that you can trust that the model is producing the outputs that you're expecting, and track it over time. And yes, it's more of an investment up front, but that investment is worth it, so that you don't go through some of those things I talked about earlier, and then you move towards true automation.

Frederic Miskawi:

And we talked about the copilot in the cockpit, and it's kind of connected to that concept. But is there a point where keeping humans in the loop becomes kind of a false sense of security?

Gaby Martin:

Are you saying that humans are just not really reviewing what the AI is doing, and they're just approving it?

Frederic Miskawi:

I'm starting to see trends that frankly worry me, especially with SDLC acceleration. When you have an agentic ecosystem that's producing a hundred thousand lines of code in a day, humans are not able to review every line of code. And we're getting to a point where it's just not realistic. But we still have processes and procedures around manual reviews and checks and series of checks and balances. And sometimes it feels like perfunctory where we're going through the process, but the value is not necessarily there. Are you saying the same thing?

Gaby Martin:

I agree with that, but I would still push on, even with the example that you gave, if someone is using agentic AI to code, maybe they're not reviewing all the code before they press submit. Normally, there is another step in the review process, right? Where before things get pushed to prod, a manager has to do a code review, and they have to review all that code. So, I don't think it's just one human-in-the-loop review that's going to be able to save us from some of these things that might happen down the line; there's more of a process that needs to be in place before something even gets shown to the customer or used internally. There have to be multiple checkpoints and multiple people saying, I trust this and I put my name on it, and not just one person being held responsible for that.

Chapter 7: Leading long-term value creation in an AI-driven enterprise

Frederic Miskawi:

Because for me, that is a governance problem. We're looking at the work that's being handled, produced by these agents, whether it's on the business side or in IT. And we want to design human in the loop, but we want to do it in a way that provides value. And the more capable these systems become, the more you need to get to the higher level of abstraction. Where instead of reviewing every line of code, you're reviewing the logic. And instead of reviewing the logic, you might be reviewing the specs associated with what gets built. And I think that's a big governance concern, at least for me.

We need to make sure that as we're deploying the systems, deploying the value, validating the value, that we can properly design human in the loop.

Gaby Martin:

Agreed.

Frederic Miskawi:

Let me shift over to the mindset of a CEO or a C-level exec, and especially the mindset of a CEO that has to lead that kind of transformation through an agentic bank. What should that mindset be? What would be the guidance that you provide to a C-level exec that is facing that kind of transformation and that has to navigate it, that has to lead it?

Gaby Martin:

Yeah. One example I keep coming back to when people talk about the investment it takes for AI is if you think about electricity and the investment it took to get to that point. And was actually using electricity cheaper than lighting a candle in your home? It wasn't. But what did that do for society down the line? It enabled so many innovations. It changed the way our society functioned. So, I think that's how we need to start viewing AI at every organization, is to stop focusing on these small tasks that need to be automated. And by automating this one task, are we going to be able to save costs? Maybe when you first start this investment, it's not going to be necessarily cost-saving. But as you get down the line, two, three, four years from now, you are going to remain competitive in the market. You're going to remain revenue-generating. And you're really chasing that transformation and the outcomes versus being so task-focused and cost-focused. So, you know, think about those outcomes that you're trying to achieve and redesign the way you measure your employees so that they start thinking that way as well.

Chapter 8: Redefining how humans and AI deliver competitive advantage

Frederic Miskawi:

Thank you, Gaby. And that will be probably the best last word that we have on this topic for this podcast today. So, we started this conversation talking about our ROI. We talked about the transformation that is happening in this space and the role of the individuals that are doing this migration and the leaders involved in taking the decisions.

I think what became clear is agentic AI isn't just about efficiency, it's about ownership, decision making, speed, trust. It's about, I think, taking a leap of faith. And what I've seen, and I think what you've seen as well, Gaby, is that the real shift isn't technological. It's leadership, it's human. The banks that experiment carefully, I think will improve. The banks that move strategically, they will compete. But the banks that rethink how value is created with humans and AI working together as we address today, they're the ones who are going to be leading. And in the next five years, we won't be asking who adopted AI. I don't think anyone will care. We'll be asking who had the courage to build an agentic bank first. So, Gaby, thank you for the insights. And to everyone listening, the future isn't waiting. And I look forward to speaking with you in future podcasts.

Thank you, everyone.