In the second part of this episode of CGI's From AI to ROI podcast series, host Dave Henderson, Chief Technology Officer at CGI, continues his conversation with John Davis and Victor Foulk — moving from strategic frameworks to concrete examples, emerging technology and a closer look at what the future of software delivery may hold.
From a government agency that built a common vulnerabilities and exposures (CVE) platform in five days, to agentic AI swarms tackling decades of legacy tech debt, to the convergence of AI and quantum computing on the horizon, this episode brings the big ideas of Part 1 into practical focus. Together, they debate whether we are headed for a post-agile world, what meaningful human oversight of AI teams looks like in practice, and what action listeners should consider taking now.
Key takeaways from the episode:
1. As build costs drop, the addressable problem expands.
Work that would never have made the backlog before — because building it would have cost more than it delivered — is now worth doing. A government agency built a working CVE mitigation platform in just five days, something that would have been unthinkable even a few years ago.
"The cost of doing things has now come down so much that it's worth doing. The amount of work, potential work, addressable work has gone up," says John Davis.
2. We may be at an inflection point where greenfield beats modernization.
Agentic AI can now analyze legacy applications, recover lost business rules and generate a solid requirements foundation, making it possible to take a greenfield development approach based on well-documented legacy knowledge rather than continuing to patch aging code.
"We've passed the point where incremental modernization is cost effective. It's actually more cost effective to maintain a good requirement set and generate net new applications than it is to maintain legacy apps," says Victor Foulk.
3. AI handles complexity, but not missing context.
AI handles complexity well when strong documentation exists. When that context is missing, the challenge is not complexity itself but a data and training problem. Recognizing this distinction helps leaders determine where AI can be applied effectively.
"If it's a complex domain, if it's a complex code base, but there is excellent documentation about that domain or that code base, then AI can actually do a really good job. If there isn't, it's almost like a training and lack of data problem," explains John Davis.
4. A post-agile future is emerging, but human oversight remains essential.
Spec-based development and agentic AI swarms are already changing how software gets built, pointing toward smaller, more efficient teams and asynchronous delivery. At the same time, governance structures that enable human oversight and trust are just as important as the technology itself.
"We've got to be able to provide human oversight of these systems in order to continue to trust the outcomes. And so I think there's going to be this amazing creative tension there as we continue to evolve how we do this business," says Victor Foulk.
5. You don't have to live on the bleeding edge, but you need to stay curious.
With new announcements emerging daily, the pressure to keep up can feel overwhelming. The sound advice for most organizations is to focus on mission outcomes and ROI, apply the technology at a pace that fits their context and form their own informed opinions by using the tools rather than following the noise.
"Don't assume that the thing that was true 12 months ago, 18 months ago is how it is today. Go and get involved, use the tooling yourself and make your own opinion," says John Davis.
Learn more and subscribe
Explore more episodes of From AI to ROI and learn how AI is transforming enterprises and government organizations. Visit CGI’s main AI page for insights, resources and updates.
Read the transcript
- Concrete examples: How is AI delivering real outcomes for clients today?
Dave Henderson (29:33)
Yeah, great stuff. I'm going to jump to some of the more fun stuff. Well, actually, it's all fun, right? The joy of my job is getting to really traverse the company and work with people like you two, as well as your teams, and seeing a lot of the innovative things that we're doing with clients. Can you guys talk about some concrete examples of how you're seeing the technology applied in ways that are delivering real outcomes and results for clients in your spaces? Who wants to jump on it first?

John Davis (30:13)
Yeah, we've talked about the broad usage and shorter cycles for alignment building and increased build speed. But I also just wanted to talk about a very concrete trend that I'm seeing across a few clients, which I could almost have introduced in the business section at the top, right? Because it's building their own tools, where the cost of building a useful tool to accelerate their own development has now come down so much that they are doing it themselves. So there is a government agency that I was with a couple of weeks ago that, in only five days, effectively used Codex to build a CVE mitigation platform. So this is something that takes all of the vulnerabilities that we're aware of, effectively increases visibility of them, and then allows somebody to say, actually, I want to go and mitigate that. I want to go and fix it. And then using Codex on the backend to say, right, I want you to go and basically download the repo, see how you would fix that vulnerability, go and do it. And then I will effectively accept that and have that change pushed. Now, a few years ago that would be…

Dave Henderson (31:34)
I have to stop you and I have to ask you: what is a CVE?

John Davis (31:39)
Ultimately, it's a common vulnerability, right? So it's a well-known global identification for a known risk, often in a package that you are using, and you often need to mitigate it by upgrading or changing your code in some way.

Dave Henderson (31:55)
Excellent.

John Davis (31:56)
Yeah, so if you think a few years ago, it would never have been on the backlog, because building that thing would have been too much of a distraction for the team who were cracking on with delivery. Whereas now the cost of doing that has come down so much that it's worth doing. And that's why I could have spoken about it in the business section, because that's where I think, from a business and product point of view, they'll be saying the amount of work, potential work, addressable work has gone up. The cost of doing things has now gone down so much that it's worth doing.
And we're seeing that, I guess, in this example here in a concrete team doing something that's adding value to their delivery, but it equally applies at the business and product level as well.
Victor Foulk (32:41)
You know, it's amazing that you mentioned vulnerability mitigation, zero-day exploits, CVE analysis as a starting point for innovation. We saw the same thing. In fact, that use case was the start of our entire AI-enabled IT modernization practice, where we're able to do what you were just describing. We identify a zero-day exploit, something that pops up in the CVE database, and we analyze the code base to identify whether or not that vulnerability is present and, if so, how to mitigate it. As a former federal CISO, I can tell you that vulnerability management is the bane of our existence. It is a never-ending source of work. And it's amazing that we have these capabilities, especially when you can make them bespoke, to go off and really reduce the workload and increase the throughput.
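Neither speaker details the platform's internals, but the loop John describes (surface known CVEs against what a team actually runs, then hand each hit to a coding agent as a concrete fix task) can be sketched roughly like this; the `Cve` record, the `mitigation_plan` helper and the naive version parsing are illustrative assumptions, not the agency's actual design:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Cve:
    """A known vulnerability record, simplified from a real CVE feed."""
    cve_id: str
    package: str   # affected package name
    fixed_in: str  # first version that is no longer vulnerable

def parse_version(v: str) -> tuple[int, ...]:
    # Naive dotted-version parse; a real feed needs full version-range handling.
    return tuple(int(part) for part in v.split("."))

def mitigation_plan(dependencies: dict[str, str], cves: list[Cve]) -> list[dict]:
    """Match installed dependency versions against known CVEs and emit one
    mitigation task per hit, ready to hand off to a coding agent."""
    tasks = []
    for cve in cves:
        current = dependencies.get(cve.package)
        if current is None:
            continue  # package not used here; nothing to mitigate
        if parse_version(current) < parse_version(cve.fixed_in):
            tasks.append({
                "cve": cve.cve_id,
                "package": cve.package,
                "action": f"upgrade {cve.package} {current} -> >={cve.fixed_in}",
            })
    return tasks

deps = {"requests": "2.19.0", "flask": "2.3.0"}
feed = [
    Cve("CVE-2018-18074", "requests", "2.20.0"),  # a real requests advisory
    Cve("CVE-2099-0001", "django", "4.2.1"),      # hypothetical; package not in deps
]
print(mitigation_plan(deps, feed))
```

In the setup John describes, each emitted task would then go to an agent (Codex, in their case) to clone the repository, apply the fix and push a change for human acceptance.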
- Legacy modernization vs. greenfield: Where does AI deliver the most?
Victor Foulk

And what I'd add to that is, John, as you mentioned the cost, I think we're really at a point now where we've passed the inflection point at which incremental modernization of our applications is cost effective.
We've passed the point where that's cost effective and it's actually more cost effective to maintain a good requirement set and generate net new applications than it is to maintain legacy apps.
Dave Henderson (33:56)
Are you willing to make that a universal statement, Victor? Or is that for some cases?

Victor Foulk (34:07)
We're in the process of proving it right now. We're in the process of proving it right now because we are confident that that is the case. And so when we talk about, I won't name the product, but when we talk about very large-scale enterprise applications that we deliver (because we're not just a systems integration and consulting company; we actually have the expertise in building really big products too), we're testing that hypothesis on a very, very, very big enterprise application. We're confident what the outcome is going to be, and we're going to learn a lot of lessons. But what we're seeing is that nuance between greenfield development and legacy IT modernization. Greenfield's a whole lot easier right now. And where we're seeing the ability to go from legacy application to net new is in being able to leverage these agentic AI swarms, in our case, to analyze legacy applications or an existing IP stack and derive requirements, firmly embedded in a deterministic knowledge graph, and to do the documentation, dependency mapping and risk identification, and take all the guesswork out of the modernization. So essentially what you're doing is taking that legacy IP stack, that legacy tech debt, and creating a really solid set of new requirements and guidelines, and then taking the greenfield development approach on really good requirements and really good documentation and building a quality net new app. And every enterprise we deal with has some tech-debt-ridden legacy application, right? That's one of the most common use cases that we've got across our client base right now. But you can use that to go from version 1.0 of a particular application to version 2.0. And it's very repeatable and it's very efficient.
Dave Henderson (36:10)
Yeah, John, any insight, counterpoint to that?

John Davis (36:18)
I was talking to someone the other day about the suitability of AI for different types of project. And we were talking about, you know, a matrix of risk, how safety-critical it is, complexity. And one of the things I find interesting is when we say complexity, that's a big term, but unpacking it, sometimes from an AI perspective it doesn't matter if something's complex, if it's well documented. I don't even necessarily mean the requirements at this point. AI is just very good at complex things. If it's a complex domain, if it's a complex code base, but there is excellent documentation about that domain or that code base, then AI can actually do a really good job. If there isn't, then in a way it's not a complexity problem. It's almost like a training and lack-of-data problem.
So I guess I would probably be a little bit more caveated there. There are several levers that we have to look at. But AI is quite common now, obviously; we're at a point where it would generally be more interesting to find where we're not using it. And there are some obvious candidates for that in space, defense, intelligence and so on.
But yeah, I think having a view of a matrix where you look at all these different lenses and understand when it should be used and when it shouldn't is really interesting.
Dave Henderson (37:40)
Yeah, I think it's fascinating, and I think we're absolutely taking the leap there. I talked to a client yesterday, a very large, complex telecommunications company, and they have 1,500 legacy applications in their enterprise. And I think the question is, how are we going to modernize all of this? There are a lot of business requirements buried deep within that legacy stack, where the people who understood why those requirements were implemented are long gone. And so there's a tremendous amount of value in being able to draw that out and evaluate it, because it is buried deep within those apps.

Victor Foulk (38:45)
I can't underscore that enough, Dave. Every client I've engaged with, even just in the past four weeks, all 10 of them, every single one has in their ecosystem that application you just described, right? Some really important, still churning-and-burning, functioning application that either one person in the company knows about or nobody knows about. There's no documentation, there are no comments in the code. And I think that one of the incredible things we are seeing about the tools we're deploying is that, systematically, we can analyze those applications. We can recover the business rules embedded within, because AI can unravel that complexity at scale. And we can actually articulate information that was lost as personnel turned over, and give hope that modernization can in fact occur without starting completely from scratch.
- What are you excited about: The future of AI in software delivery and will we see a post-agile world?
Dave Henderson (39:41)
As we've been discussing, everything is moving very, very quickly. Every day, every quarter, there's a new model version, there's a new update from one of the big AI players, and even smaller AI players are coming in to disrupt. It feels like we're waking up to new announcements every day. So I'm excited about a lot of these things, but what are you guys excited about?

John Davis (40:08)
Yeah. I guess I would split it into two areas. There's the tech (like you say, the CTOs care about the tech), and there's just so much going on there, with the agentic swarms and spec-based development; every week there's some new way of doing it. What I'm excited about is that the move from vibe coding to spec-based development isn't just about moving to enterprise scale. It's that when you're vibe coding, the human is the constraint, because ultimately they're constantly having to prompt.
Whereas when you can get to a clear specification, then you really are into that kind of asynchronous product development, where you do a good job on the spec and now an agent, or more likely agents, are going off overnight. It seems like a model we're familiar with in a certain geography, where you put in a task, you come back the next morning and it's all done. But that doesn't work…
Dave Henderson (41:11)
No batch agents.

John Davis (41:16)
Yeah, exactly. And it doesn't work with vibe coding. It needs a spec-based development approach. So that's the first thing I'm really excited about. I mean, somebody commented to me that keeping up with the number of frameworks is like keeping up with craft ales: they all have these weird names. It's exciting, but there are too many of them. So it needs to start getting down to a smaller number that we can get behind and have as best practice.
And then I guess the second thing that I'm excited about, and I don't have the answers here, is that this feels like a real inflection point, almost like moving to a post-agile world, right?
If we think about agile and all of the ceremonies and the way the deliverables are defined, they're generally designed to pass from human to human. And we need to be really rethinking what that software development operating model looks like in this AI world. And it is going to be very different. And I have some hypotheses, right? I think we'll see an AI product lead, somebody who's very AI-savvy, who's able to drive the direction, and then an AI tech lead, someone who's looking over all of the agents, keeping the bar up on quality and security. And then I think we'll bring in other roles. You know, it might be a design expert if it's a real digital product that you're building, but what they're bringing is those prompting skills to get the best design out of the agent, not necessarily doing it pixel by pixel.
And the side effect of that is, we always think of AI as an enabler, but smaller teams are themselves an enabler. Going all the way back to the start: communication, keeping everybody in large programs up to date with what you're doing, decision-making, everything is hard with these larger teams. And what I'm really hoping we'll see is lots of small teams. So not fewer people, just smaller teams adding more value in a more efficient way. So those are the two areas I'm excited about.
Victor Foulk (43:25)
So I want to not push back.

John Davis (43:29)
Push back, Victor, be on your feet.

Victor Foulk (43:29)
I have a slightly different view based on what we're doing. And you're probably right, we're going to get to a post-agile world. But what we've actually found is that to enable those smaller teams in our work, those smaller teams are leveraging swarms, as we've been saying. With the hyper-modernization efforts, we've been leveraging hundreds of agents, with different roles and different characteristics. And in order to provide the human oversight, it's not only good for the swarms’ efficiency to model them after human teams; it's really helpful to have those agents operating in a way that humans are familiar with, developing certain artifacts sprint by sprint, so that we can provide that oversight and ensure that we didn't just put a bunch of requirements into an AI black box and get an application out.
We can actually go in and assess how the requirements rolled through the system; we can identify where prompt bottlenecks happen to be, where issues may exist in our prompt harness. And it's in a structure that we're familiar with, and we're getting incredibly efficient results that way. Slightly higher token consumption, because we've got that human-like redundancy built into the swarm. Will that evolve out? Maybe. Probably.
I don't know that it goes out entirely, but we've got to remember, we've got to be able to provide human oversight of these systems in order to continue to trust the outcomes. And so I think there's going to be this amazing creative tension there as we continue to evolve how we do this business.
John Davis (45:14)
I think it's really interesting, because I think we'll find a sweet spot. If any of you, as I'm sure you do, watch the Instagram reels about software development, the classic one is the developer saying, shall I just make this one-line change? No, you need to come to the estimation meeting where you can tell me what t-shirt size it will be. And then we need to have a retro. It's a case where the size of the work is tiny compared to the size of the bureaucracy. And what you're saying there is that there is a sweet spot. If you just make the change, or AI makes the change, we have no traceability, no governance, no prioritization. But we can't just bake in the current bureaucracy. We need something new. What's exciting is finding what that looks like.

Victor Foulk (45:55)
Exactly.

Dave Henderson (45:58)
Yeah, and I think I combine a lot of these things when I think about small teams, right? In the same sense, we talked about agile really being about having smaller, multidisciplinary teams, right? So you have the business expertise tightly coupled with the technical expertise, because you cannot do this without both. A group of technologists will not necessarily be able to create anything valuable to a business, and a group of business people will not have the expertise, nor is it their job, to understand how to use all of these tools to create the right outcome. That's going to be our job; that's what we do. But we need to make sure that those multidisciplinary teams are thinking about what the post-agile world looks like. I like that term, right? And I think it will be the smart people working in those spaces that will create that future. That's what we want, right? We want them to be enabled to create that next iteration and not be stuck with something that used to work really well but is now being completely disrupted. We kind of want to disrupt ourselves, and we want our teams to disrupt themselves.
- Closing: What’s next, quantum, and advice
Victor Foulk (47:06)
Absolutely. And Dave, you mentioned it, right? We've actually called it out a couple of times in this discussion, and it's worth saying again: AI is changing everything except the marriage between domain expertise and the technology. That domain expertise continues to be the secret sauce that makes any solution feasible. And when we talk about what's exciting about the future, especially as you opened the question, Dave: there's a new thing every day. You open up your email every day and there's something new. The sheer stress and sense of dread that comes from that information flow is enough to make any CTO just walk away, right? And before I say anything about what I'm excited about, I want to highlight that as organizations grapple with the velocity of technology evolution, it's important to realize you don't have to live on the bleeding edge. There are those that will, but you don't have to. And in fact, in most mission domains, the sound advice is: focus on mission outcomes and return on investment. Apply the technology to achieve those things, especially if you can achieve them in a program that's self-funding, where you can reallocate labor or other costs, leveraging those efficiencies to do other things.
The technology will come at the pace it needs to come for your organization. Now, you should have somebody that's tracking it and continuing to bring innovation and ideas in. But take a deep breath. You don't necessarily have to live at the bleeding edge.
And so what I'm excited about is a little bit bigger than the software development lifecycle, but we've had some recent developments that are enough to lift the hair on the back of your neck. You know, the agentic AI development processes that John and I have been talking about, they're real today. They're really real today and they're going to continue to evolve and organizations are going to continue to implement them and adapt and catch up. And the reality is that if everybody stopped developing on the frontier side, we still have seven years or so worth of technical innovation out there that has to systematically be worked into industry. There's no way to keep up with the bleeding edge.
But one of the industry signals we've been looking for is when the major players in quantum start investing real capital in certified fabrication facilities. We're seeing that. And so what that tells us is that the timeline for quantum is accelerating. And when you couple quantum and artificial intelligence, we've got some game changing times ahead of us.
And, you know, getting to a 200,000-qubit quantum computer in the next couple of years might actually be a real thing. And so what that means, if you look at what quantum means, is really the potential to change the fundamental economics of decision-making, which we're using artificial intelligence for today at scale, right?
Quantum changes the economics of that. And a lot of the hardest problems that we deal with in government and regulated industries, they're not generate-text problems, they're not generate-code problems. They're optimization, search, correlation, risk and uncertainty problems: really big, crunchy math problems, right? And being able to couple artificial intelligence with that problem-solving capability is going to give us the ability to make sense of information and propose options in ways we've never had before. And I'm really, really excited about the convergence of those technologies. And I'm predicting that they're just years away, single-digit years away.
Dave Henderson (50:47)
Sounds like a great topic for our next podcast, a great cliffhanger. But John, any final takeaways from you? Victor, that was a great close-out, great insights, and I appreciate all of your discussion today.

John Davis (50:47)
You heard it here first. Yeah, I feel, day to day, I'm almost starting to see a bit of a culture war, the pros and the against. And my takeaway for people would be: with this fire hose of LinkedIn and newsletters and things, just try and actually experiment as much as you can yourself. Take some of these tools, play with them, learn what they can do. Especially, don't assume that the thing that was true 12 months ago, 18 months ago is how it is today. And if you don't think it works for you, that's absolutely fine, but make your own opinion of it. Don't just believe all of the LinkedIn hype. Go and get involved, use the tooling yourself and make your own opinion.
Dave Henderson (51:47)
Excellent. All right, great, great takes there to end our session here. So thank you all very much for joining us and that'll conclude this edition of the From AI to ROI podcast.