In this kickoff episode for Season 2 of CGI's From AI to ROI podcast series, host Dave Henderson, Chief Technology Officer at CGI, is joined by John Davis and Victor Foulk to explore how AI is transforming software delivery and why this shift is a board-level conversation, not just a technical one.
Drawing on real-world client engagements and CGI's own internal adoption, the discussion examines where the true return on investment lies beyond code generation. They explore how organizations can avoid building the wrong things faster, where AI delivers the greatest impact across the software development life cycle (SDLC), and what a future of AI-powered delivery may look like in a post-agile world.
Together, they challenge common misconceptions, share practical examples and outline how leaders can position their organizations to capture sustainable competitive advantage.
Key takeaways from the episode:
1. AI in the SDLC is a business strategy conversation, not a technical one.
The question is no longer whether AI improves productivity: it does. The real question is how organizations choose to reinvest those productivity gains. Leaders who channel them into delivering greater customer and business value, rather than focusing solely on cost reduction, will create long-term advantage.
"Ultimately that is both a business and product kind of decision for what to do with the productivity, as opposed to a technical one for how to deliver it," says John Davis.
2. Most large programs have a waste problem, not an efficiency problem.
In large-scale initiatives, development speed is rarely the primary constraint. Communication gaps, misalignment and slow decision-making are often the true bottlenecks. AI's greatest opportunity early in the life cycle is eliminating waste — enabling teams to build the right thing before accelerating delivery.
"If you've ever done any value stream mapping, it's about finding the waste and removing it, and most large programs have a waste problem, not an efficiency problem," explains John Davis.
3. Vibe coding does not equal enterprise-scale delivery.
AI-assisted rapid prototyping can accelerate ideation and alignment. However, enterprise-grade software still requires requirements traceability, testing discipline, governance frameworks and strong data controls. Without these foundations, organizations risk delivering vulnerabilities at speed.
"Integrating AI into all phases of the software development life cycle really does raise the bar on disciplines. We can achieve incredible efficiencies, but we can't do it without the organizational discipline that we've known and lived with for all of our consulting lives," says Victor Foulk.
4. The biggest value is expanding what's now possible, not just doing the same things faster.
When the cost of building solutions decreases, the addressable problem space expands. Teams can experiment more broadly, test more hypotheses and pursue initiatives that previously would not have justified the investment.
"If that bottleneck is removed, the product manager is thinking, I'm not going to A/B test this — I'm going to A/B/C/D/E/F/G test this. It's a whole new mindset," says John Davis.
5. Domain expertise remains the secret sauce — AI doesn't change that.
No matter how quickly the technology evolves, the combination of deep domain knowledge and AI is what makes solutions viable at enterprise scale. Organizations that treat AI as a replacement for expertise will struggle; those that use it to amplify expertise will gain a lasting advantage.
"AI is changing everything except the marriage between domain expertise and the technology. That domain expertise continues to be the secret sauce that makes any solution feasible," says Victor Foulk.
Learn more and subscribe
Explore more episodes of From AI to ROI and learn how AI is transforming enterprises and government organizations. Visit CGI’s main AI page for insights, resources and updates on AI-powered strategies.
Read the transcript
- Introductions
Dave Henderson (00:00)
So hello, I'm Dave Henderson. I'm the Chief Technology Officer at CGI and your host to kick off season two of our From AI to ROI podcast. We'll be giving you a real world look at a topic that's really been dominating the headlines and certainly a lot of my C-Suite and board meetings with clients: AI in the software development/delivery lifecycle.
This often starts as just a tooling or productivity discussion: how much efficiency can you get, or how are we going to make our developers faster. But really, I think the conversation goes much more quickly to more strategic discussions. What problems can you now afford to solve? How can you make sure you're not just building the wrong things faster? How do you future-proof your competitive advantages with this technology? And from what I've seen, anyone responsible for value and outcomes for their organizations and customers should be digging into this topic, not just the technical experts.
So in the podcast today, we will start with an overview of the actual value and ROI. We're going to discuss some concrete examples of what we're seeing in our client projects and within our own company. And then we're going to wrap with key takeaways and what should be on your radar for the future.
Having started my career as a software developer before becoming CTO, this topic is one that's near and dear to my heart. And I'm really excited to be joined today by John Davis and Victor Foulk, two leaders and practitioners who live out in this transformation world every day with our clients and inside CGI. So John, Victor, could you guys give everyone a quick intro before we dive in?
John Davis (01:40)
Sure, thanks Dave. Like you say, I’m John Davis. I'm a Vice President Expert in AI Software Engineering and that involves helping CGI go on that transformation and also giving advice to our clients as well on the same topic.
Dave Henderson (01:52)
Great. Victor.
Victor Foulk (01:53)
I'm Victor Foulk, Vice President at CGI Federal, and I lead our emerging technologies practice, which means I oversee research and development across fields like artificial intelligence, advanced analytics and quantum. We help translate those capabilities into things that actually achieve mission outcomes in our client environments, which means a lot of my time is spent bridging the gap between a cool demo and enterprise-grade delivery, and even more time spent helping our leaders make good decisions about where AI is truly material versus where it's just noise.
- What is the ROI beyond productivity gains, and why should leaders care?
Dave Henderson (02:29)
Excellent. Well, welcome, gentlemen. So let's jump right in. So my first question is really one that I alluded to in the opening, which is, you know, there are a lot of headlines about productivity and how things are evolving very, very quickly.
Dave Henderson (02:48)
But you guys both have hands-on experience. You're working with teams that are working directly with clients and internally as well. How should leaders be thinking about value and ROI when it comes to AI in software delivery and development, or the SDLC? It really has a big impact, but how should people be thinking about that?
John Davis (03:10)
Yeah, absolutely. I think it's really interesting that we've kind of moved on from some of the debates about whether it will improve productivity, and now we're having the conversation, effectively at the business level, about what to do with that productivity. And that is why it is a business-level conversation and not a technical one: you've got this productivity, what do you do with it? You might say, well, actually, I want to make some cost savings. But then you realize that you live in a capitalist society, and you look at your competition and see that they're using that productivity to bring their product to market faster, investing in features. And if you don't do the same, you're going to fall behind. And even if you sit in the public sector and you're thinking, I'll take some savings — well, ultimately the citizens are saying, we'd quite like that value as well. If you can turn that productivity into value, as opposed to a cost saving, then we'll take that. So ultimately that is both a business and product kind of decision for what to do with the productivity, as opposed to a technical one for how to deliver it.
Victor Foulk (04:15)
You know, I like how you frame that, John. If we reduce this to a concept of AI helps developers type faster, most of the value is missed, right? And I think for executive leadership, the question is really what new business outcomes or what new technical outcomes are feasible when we've reduced the cost of analysis, synthesis, and iteration. In our business, I know that we see ROI show up in a couple of different ways, tactically.
A reduction in cycle time and decision speed allows us to compress the distance between an idea and a requirement and a testable outcome. And what that means is a reduction in risk, not just cost. And I think that we often miss how these tools and these capabilities and the impact they have on the software development lifecycle impact risk, not just a dollar figure. And then we're expanding the feasible backlog. Like John, what you were saying, when we reinvest those savings, we tend to not call them savings, we call them labor cost recovery or something like that, because we're going to spend the same amount of money. We're just going to reinvest that in achieving more and doing work that was unfunded before.
And then back on the risk topic, I think quality and resilience is another big ROI factor that's a part of the conversation. When we have fewer defects, when we're able to leverage artificial intelligence within the software development life cycle to improve code quality or detect issues faster, or when issues arise do root cause analysis faster, we're reducing cost, we're reducing risk, and we're delivering mission outcomes faster. And I think really the tagline there is: efficiency is not just a lever, it's a strategic capacity increase. And if we're thinking about it that way, it helps us prioritize how we apply the technology with a focus on return on investment.
John Davis (06:14)
Exactly. And that's not just at the business level, it's at the product level. Like you said, you talk about reducing risk, but ultimately the product manager isn't thinking, well, I'm going to do everything I used to do and it's going to be a little bit quicker. They should be thinking, even if I'm an excellent product manager, I don't have a crystal ball, right? So I need to experiment. I need to test hypotheses. And beforehand, the bottleneck for that was almost like my development capacity: how many ideas could I test? Whereas if that bottleneck is removed, the product manager is thinking, I'm not going to A/B test this, I'm going to A/B/C/D/E/F/G test this. It's a whole new mindset, right? It's using the capacity in a very different way, which is both reducing risk, because maybe you haven't picked the wrong idea, and increasing the opportunity to find value by being able to experiment and test more hypotheses.
- How are people reacting to these changes and how is it changing the way we work?
Dave Henderson (07:12)
You know, I'm going to follow that thread a little bit, because human beings are funny in that we don't necessarily adapt to that kind of change, right? If I've been doing something the same way for a very long time, and I know how to get something done, I want to continue to do it that way. What are you guys seeing in terms of having professionals step back and reevaluate the way they're approaching solving these problems? John, going from a kind of stepwise way of thinking to more parallel thinking — throwing a bunch of stuff at this at once. What do you guys think in terms of just the human ability to absorb and adapt to this change?
John Davis (07:59)
Yeah, I think it's fair to say that we have to realize it's mixed. So, some people are really looking at what's possible and they're so excited that they're thinking about how to use that. So I guess in the example that we were kind of just talking there about experimentation, I'm seeing product managers working with their tech teams with some AI tooling to be able to get to almost like prototypes much faster. And they're finding a new way of working.
They're like, well, I don't want to task you, have you come back in two weeks, and then tell you, unfortunately, that's not quite what I meant. I want to work in almost real time with you, where you're able to use this AI tooling to create rapid prototypes almost instantly, and with that I'm giving you almost real-time feedback.
And then what tends to happen is, how do you amplify that ultimately? How do you make sure that it's not a stick? Do you need to be telling people you have to work in a different way? You want people to see that example and go, I want to do that, how do I get some of that? And I think that's what I'm kind of seeing inside of CGI: how do we take the success stories and amplify them to everybody, so that people are seeing that carrot — how do I get that? Because not everyone has the time and the bandwidth to be doing that exploration and trying those new ways of working.
Victor Foulk (09:24)
Yeah, everything John said and I'm also seeing it. I mean, let's be honest, right? We're seeing an amplification of the workforce skills gap that we predicted years ago was going to happen. It is happening. You have your early adopters and you have your thought leaders that are leaning into the technology and they are adapting and capitalizing on the nuanced changes in the way that we work.
But at the same time, we have very focused efforts to try to make sure that the rest of the workforce is in fact changing too, to be able to capitalize on the technology.
Beyond our own workforce, we're also seeing this changing behavior. In the public sector, we used to get a request for information with some high-level statement-of-work requirements in draft or something, and we'd write back some beautiful pontification of a vision. We're not necessarily doing that anymore. As John mentioned, we're going to take that and go line by line through those requirements and quickly churn out a visible, interactive prototype or proof of concept, so that when we talk with a client, we're not just talking about requirements on a page and words, we're experiencing an immersion in a demo, right? And we're able to do that at incredible speed.
Now, we're not saying those demos are enterprise-grade delivery.
But being able to prototype that fast is fundamentally changing not only how we work, but how we collaborate with our clients.
- Most large programs have a waste problem and not an efficiency problem, and not everything can be solved with vibe coding.
Dave Henderson (10:55)
Yeah, those are great points. As you guys are out there, I certainly am engaging with folks and spending a lot of time talking about some of the misconceptions about AI and what it can and can't do. Am I going to be able to now vibe code an ERP system? Can anybody do anything now with AI? And I always say, well, is that anything going to be good? How do you know it's good? And how do you now start to build something? So it goes back to some of the things that you guys have talked about. And I think you touched on it, John, about development speed as the bottleneck. Development has been getting faster for a long time, and now it's taken a bit of a warp-speed jump. But is development really the hardest thing that we do? I don't think so.
John Davis (11:58)
Spoiler, no.
And that's what we kind of see: there's a lot of narrative around code generation tooling — obviously a lot of this started there, and that's where the productivity gains are. But if any of our listeners have ever been part of a large program, I bet they don't think it went badly because the developers weren't 15% or 30% quicker. That's not the problem. The problem, especially in these larger programs, is that it's just hard: communication, decision-making, alignment.
And I think it's really a conversation about efficiency versus waste. Right. So a lot of the kind of coding is around improving the efficiency. But earlier in the cycle, it's about removing waste. You mentioned it in your intro. If we don't get this right, then it's just building the wrong thing faster.
So what we need to be using are these different loops that we've effectively already talked about. Rapid prototyping is not just "we did it a bit quicker." It is getting to alignment, so that when you go into the build phases, you can have that acceleration with confidence that you're building the right thing. So when we look at AI and think about how it can help with decision-making, with architectural decisions, with design decisions — these ultimately remove the waste from the system, right? And then once you go into the build, then yes, of course you want that to be more efficient. But if you've ever done any value stream mapping, it's about finding the waste and removing it, and most large programs have a waste problem, not an efficiency problem.
Dave Henderson (13:44)
Yeah, Victor, you work in some pretty large environments that are certainly concerned about waste and efficiency and how to deliver more, faster, in the federal space.
Victor Foulk (13:45)
Yeah, absolutely. And before I even jump into the concept of misconceptions, I’ve got to pile on to what John said. As you know, good coders are important to a successful delivery project. So is a good scrum master. So is a good program manager. And all the things John just laid out are principles for how a good project is run. Turns out they're also how you orchestrate agentic AI at scale, right? It's the same principles.
You run a program like a good program and it works. Now, from a misconception perspective, John already stole my idea. The constraint in enterprise delivery is rarely typing code. Not never, but rarely. There's so much work that goes into understanding the problem itself: the domain expertise to decompose the problem, getting stakeholders aligned, and, especially in government, ensuring compliance.
And we're rarely taking on applications or programs that stand alone, right? So they all have to integrate into some legacy fabric. That takes work too. And so all of those factors combined, that's a lot of work, right? And it's a lot of work that's way outside of the realm of just coding. And like John said, that's where a lot of waste or complication tends to creep in and slow projects down. Leveraging AI to unravel some of those aspects is where we see applicability in the broader extended aspects of the software development life cycle.
The second misconception that I'd point out, I mentioned it a little bit ago, vibe coding does not equal enterprise scale delivery. It just doesn't. And in some parts of the industry, the term vibe coding is a bad word. And there's a reason, because if you leverage vibe coding without the domain expertise and without the guardrails and principles of good project management, or in that case, AI orchestration, you're delivering vulnerabilities at speed, not good code, right? Enterprise grade software needs things like requirements and traceability. You need testing strategies. You need governance. You need data controls. You need to be able to model the data right. Can you outsource some of that to AI? Can you outsource all of it to agentic AI at some point in the near future?
Most likely, but not today. It requires a team that knows how to manage artificial intelligence, especially agentic AI at scale, and deliver repeatable, scalable outcomes that go beyond just a chat interface and vibe-coded output.
So I think the reality there, and the reframe for leadership, is that integrating AI into all phases of the software development life cycle really does raise the bar on disciplines. We can achieve incredible efficiencies, but we can't do it without the organizational discipline that we've known and lived with for all of our consulting lives.
Dave Henderson (17:05)
Yeah, great point, because in my career, having come up from software developer to P&L owner to SBU president, you're right. When I look back at all of the projects we've delivered that have had some sort of challenge, it almost always comes down to our ability to understand, hey, did we build the right thing?
Did we have a proper understanding and agreement of what we were building, and was that articulated in a way that people could understand and then deliver the right thing? It was rarely, to your and John's point, a matter of missing technical expertise — where, if we had just had that, the project would have been successful. I challenge you to go back and find anything in our world of delivering enterprise solutions where the challenge was firmly rooted in our ability to deliver on a technical skill. It's really about understanding and being able to master those technologies, and combining that with a now really advanced, accelerated way to understand and ideate with a client, with a business.
To me, that's really exciting. And to me, vibe coding sticks right in there in that ideation phase, back to John's earlier point. It's like, hey, you know what? We can now cycle through ideas in minutes and hours, not weeks. The days of, okay, we'll get you a wireframe back next week, or even in a couple of days, are now gone. And I think that now we can go so much faster, but it's about training everyone to operate at a different speed.
One of the things you were touching on, Victor — and I'd like to hear John's take too — is that companies like CGI, at the end of the day, aren't we really building that trust framework for using this technology? When companies come to CGI, they are not coming to us just to be taught how to code faster. In some cases, yes, we can do all that. But at the same time, we're delivering solutions, and we want to be able to use all of these tools and maximize the speed, efficiency and security of those solutions. What's your take on the value that CGI is bringing here for companies that work with us?
- How are companies like CGI delivering value?
Victor Foulk (19:51)
Yeah, so I'll start with some of the things that I'm seeing, and I think it's true across the board. Customers don't necessarily come to us to buy a widget. They don't come to us to buy a thing. It's the old adage: if you have a headache, take an aspirin. They don't want the aspirin. They want their headache to go away. And that's really what we're delivering.
They've got external budget pressures, they've got additional mission challenges, and they're looking to solve problems, right? They're looking for mission outcomes and they come to us and see a demonstrated, repeatable and trustworthy history of technology implementation, emerging technology integration, governance, the ability to deliver on mission outcomes at scale.
Dave Henderson (20:43)
Yeah. John, what's your take on really kind of the value equation that we can deliver?
John Davis (20:45)
Yeah, I think it's quite different depending on how we're helping our clients, because sometimes the client, like you say, Victor, says, I want this outcome, and in a way, I am less interested in how you achieve it. I just expect that you are absolutely at the forefront of where innovation meets trust. There are all these frameworks out there — long lists of agentic swarms, ways of doing spec-based development that have come out of, you know, vibe coding, trying to bring some more structure to that. But the outcome buyer is saying, I just trust that you are on top of all that and you are using it to deliver my outcome for me. Do you know what I mean? That's all I care about.
Whereas we obviously also work with clients who want to work with us on a transformation. We're working with them, we're working with their software delivery teams, and that changes everything, right? Because now they very much do care. Like, should I be using BMAD? Should I be using Spec Kit? What do my new roles and responsibilities look like? And what they're wanting from us there is that we've done the similar work — being on top of everything, knowing what you can trust and what you can't — but now we can filter that and pass that value on to them, and say, what we have found is this, and we've got some hypotheses for what will work with you. So to me it really differs between an outcome buyer and, as we often see, a transformational client that is saying, help us transform with you.
Dave Henderson (22:24)
Yeah, absolutely. I talk a lot about how it's not about the technology. But you know what? I'm the CTO, right? So a lot of it is about the technology. People always look at me funny when I say it's not about the technology, but it is about mastering the technical landscape and understanding how all of these advanced tools can be used together to deliver real outcomes — much closer to the real things hitting the market than some of the old processes and tools we had in the past. And so it's about bringing organizations up to speed to leverage these tools. And I'll go back to the human element: we're a company full of technologists, right? So we've got to bring our technology organization along to understand both sides of that coin. We've got to use these tools to better understand the problem space we're trying to solve for, and to work with our clients using our deep industry expertise. But at the same time, we've got to be really good at using these tools. We need to be masters. We need to still have that engineering mindset, don't we?
Victor Foulk (23:48)
Absolutely.
- What are lessons in how we have transformed our own organization?
Dave Henderson (23:50)
A lot of our clients are now trying to understand how to transform their organizations, and we've already been there as a company, right? And so a big benefit of our learning was being able to see how we could transform our own organization and then apply that to help our clients with the same transformation.
John Davis (23:55)
Exactly, exactly.
Dave Henderson (24:12)
John, you want to talk a little bit about how that's evolved in the UK and what you guys have seen and what's worked?
John Davis (24:21)
Yeah, we tend to talk about it, you know, as a client zero, how have we transformed ourselves from three different lenses? So one is ultimately, it is leadership, it's making sure that the leadership has a strong vision and is prepared, frankly, to put some capital into the investment. If you don't have the vision and the investment, the other two lenses that we'll talk about aren't going to fire on all cylinders. So it starts with leadership vision and investment.
The second one is we don't start centrally necessarily with like, this is how we do everything. We respect that we have a huge number of talented people and lots of them are spending huge amounts of their time actually playing with this, experimenting with it, finding innovative ways. It's that we don't want to take that to the center and just say, we've got the best idea for how we do it.
So client zero for us is finding those stories right across all of our partners. Where do we find that someone's done something innovative? We bring that in and then use it in the third lens, which is what we call the lab. This is now basically productionizing that gem: taking that green shoot of an idea and saying, well, maybe we just need to knock the edges off it, maybe we need to polish it and write down that best practice. And then the role is to amplify it — to actually now tell everybody, this is something that you should all be doing. So I think from a client zero perspective: leadership, then extracting creativity and value from all of our partners, and then amplifying that through the lab, is what's worked well in our client zero story.
Victor Foulk (26:03)
Yeah, and I'd echo that, right? So in the federal environment, you know, we had a decision to make early on, years ago at this point, if we're talking about generative AI, it's old tech now. We had to decide whether we were going to bring in new talent or develop talent, you know, upskill folks that we had. And obviously the decision in my research and development organization was to upskill.
And so what's important that John mentioned, right, is investment. We had the trade space in terms of resources to go work with the technology and apply it to different problem sets and develop the skill sets internally. We did that early on.
And as the skills began to develop with that trade space — where we had a little bit of investment and the trade space to take risk — it was a matter of connecting with domain owners across our business, whether that be in human resources or finance or marketing or fill in the blank, talking about the technology, the evangelism, and taking problems that exist across the organization, taking additional risk and trying things out in a contained environment. And over time — not a lot of time — that upskilling process turned into real implementations of technology, solving small, bounded problems within each of these domains. And we learned how to do it. As we did that, we developed the frameworks for how to do our workforce upskilling activities. We learned how to take this artificial intelligence technology and constructively apply it to various domains, and in the process, it's amazing how powerful the coupling of domain expertise and artificial intelligence is. I don't think it's AI solving a problem by itself. There really is a secret sauce associated with that domain expertise. In a client zero context, being able to help our internal stakeholders understand the technology and think through its application to their domain empowered them not only to solve the problem we were talking about then, but to continue down the stream of value delivery, solving additional problems.
And when you fast forward that, several years into the future, we've defined pretty robust frameworks for how to do this. And we advise clients on how to do the same.
So I think the takeaway from a client zero perspective is this: if you want to be in a position where your organization has that internal capability to develop net new, you have to have investment and you have to have the acceptance of risk. You have to have those two elements of trade space in order to operate. And if you're in a more risk-averse organization, you really need to understand your partner and alliance ecosystem. You don't have to build everything yourself, right? If you don't need to be on the bleeding edge, you can work with organizations that have had this client zero experience, like CGI, that can help you understand how to apply frameworks and this innovation in your domain and create a very quick transformation within your organization.