In this episode of CGI's From AI to ROI podcast series, host Diane Gutiw, who leads CGI's AI Research Center, welcomes Nicholas Morel of Google and Raghav Kumar of CGI to explore what is driving innovation in the software development lifecycle and why it matters to business leaders.
Drawing on real client experiences across industries as well as large-scale enterprise programs, they examine why most projects struggle to reach production, how the right governance culture accelerates innovation rather than slowing it down, and what the shift to agentic AI means for the future of teams and organizations. Building on episodes 1 and 2, this conversation covers the breakthroughs, mistakes and lessons learned by clients and CGI on their respective AI journeys.
To learn more about AI in software development and delivery, visit our software development lifecycle page.
Here are the key takeaways from this episode.
1. The most regulated industries are becoming unexpected AI early adopters.
Digital natives are generally assumed to lead the way in AI adoption, but the reality has proven more surprising. The public sector, financial services, healthcare and utilities are moving very quickly, not despite the strict documentation requirements they must meet but precisely because of them. These organizations combine highly complex processes with large data assets that AI can fully exploit. For them, modernization is a competitive imperative, not just a technology project.
"Every handoff requires a lot of documentation, and this technology really helps us harness that value." — Raghav Kumar, CGI
2. The real obstacle isn't the technology; it's the surrounding ecosystem.
When AI software projects fall short of expectations, the technology is rarely to blame. More often, the root causes are governance gaps, inadequate change management, siloed implementations and ROI measured through the wrong lens. Organizations that ask how to shrink their workforce get poor results. Those that ask how to deliver more value with the same team build a sustainable approach.
"That package entails change management and sponsorship. It also entails the right guardrails, security and governance, so that these solutions can not only reach production but actually evolve over time." — Nicholas Morel, Google
3. Unlearning is just as important as learning, especially for experienced employees.
CGI's experience as client zero revealed a surprising reality: junior developers adapt faster than seasoned ones. It is not a matter of ability but of deeply ingrained habits that must be unlearned. Effective change management programs focus on coaching to help teams do exactly that. Once the architects and subject matter experts made the transition, the relief was immediate: they could finally concentrate on strategic work instead of fielding every small question.
"Experienced employees had the hardest time adapting, because their ways of working were deeply entrenched after all those years." — Raghav Kumar, CGI
4. A governance culture is a competitive advantage, not a constraint.
In CGI's case, the culture of security, data stewardship and rigorous risk management built over decades accelerated AI adoption rather than slowing it down. Because employees already understood data classification, acceptable AI use and information handling, the shift to responsible AI adoption required less adjustment than in organizations starting from scratch.
"It simply resulted in a product that was much easier to envision putting into production, because we had addressed the requirements of the CISO, the CIO, the CTO and leadership up front." — Nicholas Morel, Google
5. The agentic AI future raises questions we can't yet answer.
Agentic AI is no longer a theoretical matter. Yet the operational, ethical and governance issues it raises are only beginning to be addressed. If an agent makes a mistake, who is accountable? How do you evaluate and measure an agent's performance? How do you design human-agent handoffs that preserve trust? These questions are now part of organizations' thinking as solutions move from pilot to production.
"Agents are machines. So how do we evaluate them? Who is accountable for their mistakes?" — Raghav Kumar, CGI
6. Leaders must get hands-on; authenticity comes from experience.
The most common gap between AI ambition and execution opens up when leaders impose tools and strategies they have not tested themselves. Both guests offered the same sound advice: get hands-on with the technology before defining the vision. Not to become an expert, but to speak from real understanding of both its limits and its potential.
"Lead by example. With AI, move from strategy to reality." — Raghav Kumar, CGI
Guest: Nicholas Morel, Generative AI Specialist – Google Cloud
Nick has been the lead artificial intelligence (AI) specialist at Google Cloud for Eastern Canada since joining Google. He was previously a partner at Moov AI, a Montreal-based AI consulting firm focused on developing custom AI solutions for companies such as Pratt & Whitney Canada, Merck, Métro and many others. Over the past eight years, he has held leadership roles at several technology firms in a career marked by success. He loves helping organizations put technology to work solving real problems and adopt it in ways that deliver positive outcomes. In this conversation, Nicholas is happy to discuss the latest technology trends and their influence on the future, explore the challenges and opportunities taking shape, and share his perspectives to help businesses stay ahead.
Learn more and subscribe
Explore more episodes of the From AI to ROI podcast to discover how AI is transforming businesses and government organizations. Visit CGI's AI page for insights, resources and the latest news.
Read the transcript:
- Section 1: Introductions and Where AI-Driven Software Delivery Innovation Is Taking Hold
Diane Gutiw (00:00)
Hi everyone, welcome back to From AI to ROI. I'm Diane Gutiw, leading CGI's AI Research Center. And today we're going to be building on the last episode, unpacking some of the headlines and innovations to understand what we're seeing from industry leaders and clients, and where all of this AI and software development might be headed. We'll do a quick overview of the topic, but I want to dive straight into breaking down the headlines and discuss some of the nuances of what's real and what might be coming next. We also want to talk about what people are worrying about and what we're hearing in the media as well as what we're not worrying about enough.
I'm joined by Nick Morel from Google and Raghav Kumar from CGI, both of whom are seeing this transformation unfold daily in their work with clients around the world. So Nick and Raghav, can you introduce yourselves? Maybe Nick, you go first.
Nicholas Morel (00:47)
Sure. Well, Diane, Raghav, nice to see you again. So my name is Nicholas. I'm an AI specialist here at Google. And my role is to help organizations navigate these interesting times where AI is coming into pretty much every conversation we're a part of and help unpack how this kind of actually applies to real business outcomes. So really happy to be with you here today.
Raghav Kumar (01:08)
Thanks, Nick. Good morning, Diane. Good morning, Nick. So my name is Raghav. I'm part of APAC from CGI India, situated in Bangalore. I'm part of their solutions team, you know, mostly working on IPs and also work very closely with our president on AI initiatives. I've been fortunate enough to work with Diane and Nick over the last two and a half years looking at impact of AI, especially in software development and how it can help accelerate and deliver value to our clients.
Diane Gutiw (01:38)
Fantastic. So great to be here with you guys. Let's start from the last episode where we heard about how AI and enterprise software delivery is bringing and requiring both technical capabilities as well as a real need for strategic clarity. So I want to drill down a little bit into what you're hearing from your clients currently across different geographies and industries and where we're seeing the most movement on this. Nick, where are you seeing the most uptake and impact of AI and software delivery?
Nicholas Morel (02:08)
Obviously, I mean, AI has become a very kind of tip of the spear conversation for a lot of the executives that we work with. So a lot of them are kind of wondering how are we applying AI meaningfully within our organization. But if we look at how AI is coming into the lens around software development and software engineering for most companies, we're really seeing a world where organizations strive for this kind of employee and agent future, where they imagine their software developers working alongside tools and or AI to help them be more productive and help them accomplish more within the time that they allocate to doing their tasks.
And of course, what's interesting is that as we start kind of capturing initial productivity, these conversations rapidly evolve into, well, how do we factor in AI across the SDLC itself? Because in order to really reap the benefits, you really need to start rethinking how work gets done. There's a certain amount you can capture at first pass, but in order to really get the transformative change, you need to really rethink how your teams are actually working. So that's kind of where these conversations are evolving right now in the market as we see them.
Diane Gutiw (03:14)
Yeah, I think that really resonates with what we're seeing as well. Raghav, in APAC and a lot of the clients you're working with, what are you seeing?
Raghav Kumar (03:15)
So as Nick said, one of the first requirements every client comes to us with is more focused on their IT division, which is more again software development, software maintenance, you know, how AI can be leveraged. But also in terms of domain and industries, when we look at it, it's a bit surprising. We have seen the most regulated industries jumping onto AI much faster, like the financial sector as well as healthcare.
And recently one of the clients I was with — I was very curious and I asked him, you being so regulated, you know, highly restrictive, and this is a very non-deterministic technology and you know, probabilistic — how come you people are in the forefront? And his answer was very simple. He's saying, you forget, we are part of an industry which is very heavy on documentation. You know, every handoff, you know, requires a lot of documentation and this technology really helps us harness that value.
So even utilities, which generally, you know, where I work very closely, they are the ones who generally wait for a technology to mature before jumping on it. But on AI, they were the first, saying, you know, it really helps us. You know, just imagine if I'm getting insights — there might be a failure in a certain part of a town or whatever, I can preempt it, and it really helps save a lot of people's work. So, you know, very interesting insights, but these were two domains which were very interesting.
And most of the clients are actually coming back to us in terms of asking how we use some of these technologies, not only for development of software, but in terms of use cases for their businesses.
Diane Gutiw (04:58)
That's really interesting, Raghav, that you mentioned about highly regulated industries. Where we're seeing a huge uptake is actually in public sector, which generally is more slow to adopt. And it's in the code modernization where there's a lot of different legacy code that needs to be moved forward.
And I think in a lot of these very manual modernization and knowledge acquisition opportunities is where we're going to see a huge benefit from AI and software development. Nick, are you seeing that as well?
Nicholas Morel (05:27)
Yeah, I was going to say, if I can add — I mean, we're seeing a lot of uptake and a lot of interest coming from the public sector and the regulated industries, because to your point, I think some of these organizations are literally set up with so much legacy software, legacy applications and legacy processes. And when you talk about mandates coming from the Canadian government or the US government or any other governments across the globe around finding ways to optimize the way that they provide services to citizens to reduce cost, there's a lot of interest around, well, hey, how can we go into these kind of previously untapped potential areas that we kind of disregarded because of how complex it would be to modernize some of that code?
And now with these technologies making this much easier to grasp, we're actually seeing provinces and health organizations actually be first movers on this, because to Raghav's point, these are documentation heavy and process heavy organizations that have previously not really tapped into some of these possibilities because of the sheer cost to get in and play. But now they're really looking at agentic and AI as a way to really accelerate that transition. So it was kind of odd to see that kind of shape out at the beginning of the year, because you would think we always hear about the digital natives being the first on this and we're going to be an AI first company, we're going to build this. But then you start seeing these traditional businesses and or public sector players kind of leaning in really heavily on this. And it's been an interesting market dynamic shift this year.
- Section 2: What's Making This Innovation Stick — Accessible AI and a New Partnership Between Business and IT
Diane Gutiw (06:58)
I think one of the things that might be interesting to talk about would be, you know, is this different than what we've seen in the past in terms of organizations' adoption and value, Raghav? You know, what are you seeing where the foundations are maybe the same or different than what you would have expected?
Raghav Kumar (07:15)
Yeah, it's very different because what companies are realizing is this technology, as Nick was telling, helps them derive value of some of the assets which they have garnered over the years, you know, and that is data. They didn't have the right technology — maybe whatever technology existed was a bit complex to really harness the value. And now they see this technology helps them. Recently, we had one client who was very interesting. He said,
I've got so much data, I want to monetize my data, build something using this technology where I can sell kind of insights that come through this data, especially in this domain. It's very interesting — those clients generally come and talk about their own business processes, but he's finding a new business area where he can say, I can monetize my data, which was not possible. So that is what is fueling them. And to be honest, ChatGPT and Google Gemini, whatever technology or tools have come out, they have democratized the use of AI.
Traditionally, AI has always been looked at as something for geeks and R&D, a very complex discipline, more of people with PhDs sitting in a room and doing it. Now, it's much easier. Most of the developers understand. So organizations are seeing, how can I leverage and take — and also their end users are changing. There is a big gap from people who were using certain services to more of this AI native crowd where everything has become personalized, everything is internet of me, everything I want to buy is only what I want rather than looking at what was available in the market. So I think these are fueling those industries and they are pushing. So what I see is that the gap between business and IT is reducing; traditionally, IT used to manage all the technology and business was just relying on their services. Now business is making those decisions. So that's the big change we have seen.
Diane Gutiw (09:07)
Yeah, Nick, are you seeing the same? Feeling that same value?
Nicholas Morel (09:12)
Yeah, 100% agree with Raghav. What's interesting is that to your point, Raghav, around these technologies being very rapidly democratized — I think all of us can remember back in the kind of 2015, 2016, 2017 era when we were talking about AI, most organizations were aware of this kind of technology coming their way, but really didn't understand it or really had no meaningful scale. But when we had that moment back in 2022 where these generative AI models hit the consumer market, it started immediately changing the way that we all as consumers expect technology to offer us experiences that reduce friction. And we started bringing those expectations into the workplace. And now we're having line of business and or operations and or the people in the organization that hold these processes come to the table, come to Google and come and work with their IT partners to say, how can we remove the complexity or the friction or the toil in this process, and to Raghav's point, really driving some of those conversations with us. So we've really shifted from very IT centric conversations around AI and or an AI group towards line of business enablement and figuring out what are the right use cases to tackle as a team. And of course, how do we work with the IT partners and security partners to make sure that we're doing this in a way that fits with enterprise architecture and fits with our security policies. But that partnership has really formed out in the last couple of years and it's really nice to see everyone kind of working in tandem here.
Diane Gutiw (10:49)
You know, Nick, I think it's really interesting — one of the points Raghav said about the accessibility. You know, traditionally to leverage AI for decision making, you had to lean on folks that were data scientists that had a deep knowledge of the information and were able to write and interpret the models. You know, in my field in data science, you know, we largely have been hidden in the basement, you know, and been able to then provide the answers to decision makers. I think one of the real shifts that we're seeing here is the accessibility — the natural language interactions with these technologies have really brought the information that's coming out of AI and the outputs closer to business, closer to people that are not mathematicians and able to understand the algorithm itself.
- Section 3: From Writing Code to Delivering Value — What Real AI Adoption Looks Like on the Ground
All right, I'm going to shift to something more edgy now. Let's talk about what we're hearing in the media, because I know a lot of my client meetings start with, you know, we're hearing that the entire software development industry is going to change, is going to go away, that X percent of code is now being generated by AI. I'm interested in what you're both seeing when it comes to the shift of how software development is going to fit into the bigger picture in the future. Nick, what are you seeing?
Nicholas Morel (12:05)
Well, what's interesting in this regard is that we're seeing a lot of the more of the same conversation where we're seeing organizations completely rethink. For example, if you are a product company or you are a team in charge of legacy applications and stuff, well, we used to typically intake and or build a roadmap of how we would evolve this product or service over time based on the constraint, which is the amount of people that we have. We always kind of kept that in mind and say, well, hey, I can intake all of these feature requests coming from customers and intake our desire to evolve the product from a product management perspective and deal with technical debt. And we would kind of allocate resources to try to tackle these three based on the capacity bounds that we have within the organization. But when we start integrating AI into the SDLC in more meaningful ways, and of course we start seizing those opportunities at first to maybe automate some of the documentation process and other things — well, when we start meaningfully rethinking how we work, we can start thinking about, well, hey, we have this newfound capacity with the fact that we've delegated or reduced our effort and basically cycles on certain lesser value added tasks and that we can now focus on driving the product forward in new directions that we previously had not thought of. And one thing I'll add in there is that we have this concept at Google — we've talked about 10x thinking. If we were to look at a process together and say, how can we make something 10% better across the SDLC, well, we could put our heads around it and start thinking, hey, well, we could change this little thing here and this little thing here, and then we can get some efficiency. But when we say, how do we make this 10 times better? Well, then our natural reaction is, hey, let's go back to a blank page and say, hey, how are we going to do this differently going forward? 
And we're seeing a lot of organizations kind of think 10x, but then kind of apply it to their reality and say, well, how do I evolve my ways of working to kind of go in that new direction. So that's kind of what we're seeing in the market — that new capacity and this kind of desire to rethink things, because that's the only way that you can move beyond these kind of smaller productivity gains but actually go into what most companies want, which is this kind of new ways of working, which is the frontier we're seeing people push towards.
Diane Gutiw (14:26)
Yeah, you know, that really resonates, Nick. The rethinking how we work is critical. Change management and not just automating old workflows. At the Google Kickstart event, I used the analogy of the Industrial Revolution and the knitting machine. You still need to know how to knit to operate the machine, but it's a real mind shift to go from being a knitter to an operator and an overseer of a knitter to an overseer of a tool and a person. So I think we're seeing some really interesting things in how we work and interact with these technologies for the best benefit rather than just automation. Raghav, your team has really dove in as the client zero, making sure we understand how internally we use technology like these AI tools for software development. Maybe you can tell and share some lessons learned from the work that you've been doing from the early days till now with using AI software development tools.
Raghav Kumar (15:28)
Thanks, Diane. Yes, so we have gone through many mistakes. Initially we thought whatever we read in the news or a PPT, that, you know, there's a tool where I just install and it will solve most of our problems, but we figured out very soon that it's not the case, because finally those tools or that technology is used by humans, and that was the biggest change needed. Because the first thing, especially with generative AI technology, is it doesn't give you the right answer the first time.
And humans tend to go back to their old ways of working, because when they give a prompt, it gives you some answer. And then it's not the right one. Your mind starts saying, go back to your old way of working. And that's what we realized — saying, maybe we should run a change management initiative where we start with unlearning your traditional ways of working, because this technology wouldn't work if you're still doing the same thing which you were doing, as Nick was talking about, and then shift.
So recently I was reading about what Mustafa Suleyman was talking about, vibe coding, and you know how AI is impacting — and that's what related to the work we have done over the last two and a half years, is how we shifted from just writing code to actually being more — in terms of developers — actually focusing more on architecture and the value we are delivering to clients.
We're really looking at what this code does rather than figuring out, spending time about how do I write this syntax? How do I write these lines? What function do I use? Which is now done by this technology.
Very interesting — when we started this change management process, we have realized, well, the juniors were very quick to adopt. They were like a clean slate. They had to just shed some years of experience. The seniors were the hardest because they go through a hard wiring of years of working in a certain way. And it took some time for them to realize — especially the architects and some of the SMEs who felt a big relief, saying how this tool is helping them, you know, focus on their actual work rather than where everyone runs to them for every small problem.
And they were really happy. So that was the first step. We started celebrating these wins and we tell the same clients the same story, saying, please don't look at it as a technology. It actually requires a lot of investment of time and big change management. And if you really do that right, it really helps the team give you that right productivity.
The other thing was we also realized there were two important intangible benefits we started seeing. One is focus time.
These tools are integrated into your IDE. Even before these tools, normally a developer would spend at least 10%, 15% of his time going on to Google or any of those sites to search: how do I read this code? What do I do? I'm getting this problem. Now he's doing everything within the same IDE. So there is a lot of focus. He's very focused. That is really helping him become more productive. The second part was the teams have become more collaborative. Using these tools, they can generate the documents. A tester understands what is the latest requirement. He's not sitting on a stale requirement document. The developer really understands what the product owner is trying to say. And for the product owner or the SME, he's not waiting on a guy to build a prototype. He's sitting in front of the client and building a prototype then and there, showing what the client wants.
So he's actually feeling like he has got some tool which is making him a superhuman. And then he just takes the prototype to the development team and saying, this is what I want. So people see something working in action and they just extend it, make it much more production ready, rather than looking at a document where you miss something and my understanding is different from what actually the client wants. That is bridging that gap. So there is a lot of collaboration also in a way being built within the team.
The biggest thing was people are no longer afraid to ask questions because these tools really help them understand. They know exactly what is missing. They're questioning back different teams saying, you know, you're testing — maybe you missed the scenario — or asking the product owner, maybe, you know, we should look at this feature in a different way. That I think, you know, people are realizing, and we have seen a lot more benefits more than just looking at productivity. The team dynamics have changed and in terms of adoption, this has accelerated us, you know, our speed a lot.
- Section 4: From Pilot to Production — What It Takes to Scale AI Across the Delivery Lifecycle
Diane Gutiw (19:54)
I think that's a really great insight, Raghav. I'm interested in looking at it from another angle as well. I know we've heard from the MIT report that keeps echoing at a lot of client meetings that nobody is getting value from AI technology, agentic technologies at this point. Also, we hear the statistics being thrown around that 85% of AI projects fail.
Raghav, what are some of the things that you've seen when it comes to value? And do you have any ideas on how to ensure that the project is successful through to its completion and tools are moving into production?
Raghav Kumar (20:32)
So, yes, you're right, Diane, and most of the clients also, you know, when they come, we actually tell them — it's not the technology which is the problem. I think it is the surrounding ecosystem. They should not just look at this technology in silos. So one is, you know, when you don't have the governance framework put in place, your workforce is not trained correctly to use this technology — as a change management or even in terms of adoption.
The other thing is in terms of the ROI measurement: I really don't know why am I using this technology. People are just looking at only one lens — can I reduce my team? That brings in a lot of uncertainty within teams, saying, if I use this technology, I lose my job. So we have to put it in more of a constructive way, saying the whole technology is used for betterment of their work and also to deliver a lot of value to the client.
And that's what we tell the clients, saying culture makes a big difference in this. And then siloed implementations — everyone is trying to reinvent the same thing because it's a new technology, they want to experiment, they want to show. The companies are wasting a lot of money. So we say, you build a dashboard where you at least have some governance in place, knowing what's going on. That will give you a lot of value. And we have also gone through the same mistakes. Now, at least we have more centrally governed in terms of the work we do. We don't duplicate work, we reuse assets.
So that I think has been some of the good learnings. But the reason why things don't work was on that part and, as Nick was speaking earlier, saying more on that POC fatigue — people have done so many POCs, now they want to see what it actually gives. Does it really work or not work? If I extend this POC, will my cost balloon up or will I really see value?
Diane Gutiw (22:18)
So Nick, does that resonate? Are you thinking, hearing the same thing around project success rates?
Nicholas Morel (22:24)
Yeah, I'm saying that the old rule here stands true. I mean, we've heard figures from 85% of projects fail to 95% of projects fail and customers' concern around how much of these actually yield value in production after. I think it really comes back to how you choose the use cases or areas that you as a company want to choose to tackle first. Generally speaking, if you are mindful in terms of where am I going to see — where do I see a process that's worth reviewing, where do I see a part of my business that is worth reimagining — these are the types of questions that would start helping feed alignment around where, as a company, we are going to evolve and start tackling problems. Because when you have the pilot palooza going on in hackathons in every single group within the company, you rapidly end up with hundreds or thousands of use cases that everybody tries to kind of hack their way through. But because they're not thought through as an MVP or as a production ready type deployment, they don't have things like proper access controls and governance. They don't have things like proper thought around how are we going to serve this tool or this application or web app to our different employees internally.
I think the key is to pick the ones we want to mobilize all of our internal resources toward making successful, so that we can build that muscle as a company and start evolving and shipping more and more of this.
But that package entails change management, it entails sponsorship, and it entails the right guardrails, security, and governance put around these types of solutions so that they not only reach production but actually evolve over time. So that's how we see the patterns of companies that are succeeding, versus companies still falling into that old trap of creating hackathons everywhere, doing a million things at the same time, and then saying we didn't put 95% of them in production.
Diane Gutiw (24:28)
You know, I think that's a really great point that you both made there: understanding the balance between innovation, guardrails, and actually achieving real, enterprise-scalable value.
And hackathons are still critical in giving people the freedom, the space, and the environment infrastructure to be able to really, truly understand the value they can get from these tools. But to your point, that governance model, having guardrails in place, having an infrastructure that enables and an organization that promotes moving these things past that initial investigation stage into real scalable value for the enterprise is critical.
You know, I'm hearing people know that they're going to get value from the tools. They want permission to be able to go forward. They want to understand what the guidelines are so they can go forward safely. How do we get that balance, Nick? Where do you think that balance may be achievable?
Nicholas Morel (25:27)
So it's really interesting, because many years ago it was pretty popular for companies to say, hey, I'm an AI-first company because I created an AI innovation lab — and then this would be a very siloed group. Now some companies are tackling this with hackathons, saying, hey, we're doing hackathons, therefore that's an AI strategy; we're bottom-up in it. But I think that, unfortunately, as usual, it really starts at the top.
So I think leaders need to come up with a clear vision of what they want to be providing their teams: some kind of directionality in terms of platforms, in terms of technology, and even some kind of governance framework that spells out how we're going to see these solutions through. And then I think it's okay, once you've provided those boundaries, to do hackathons and have teams think of things that fit within the organization's appetite to sponsor and move forward.
So I think it's vision, platform choice, prescribing certain areas, and then opening up the floor for that line-of-business opt-in within those guardrails — not starting with AI first and treating everything as a white page, as blank space. Because you might, as a company, have a policy that certain types of data do not go into the cloud. You might have a policy that there are certain functions within the company you are not comfortable right now relegating to an AI system — for example, how you manage your customer interactions. Once you provide these boundaries, I think it's much better for a company to evolve naturally toward achieving those goals rather than taking a blank-page approach to everything.
Diane Gutiw (27:07)
Yeah, great point. And actually it leads on, Raghav, to what I'm curious about with your team's work. You know, CGI has invested over 50 years (it's our 50th anniversary this year) in a culture of security, responsible IT, and responsible use of data. How did our existing framework for responsible use of technology help inform how the team could use the tools?
Raghav Kumar (27:34)
Yeah, I mean, you bring up the right question. And the answer is that it actually fostered a lot of cultural thinking. People are much more attuned to what data they can use and what data they can put in public. And initially, to Nick's point about building the vision: the vision was to build those responsible AI guidelines before we jumped into exploring those tools.
I think, Diane, you did a lot of good work putting the risk management framework in place: how you manage risk, the risk-coding mechanism. Teams were also more attuned to what tools they could use, and some of the contracts we signed with the hyperscalers had guardrails put in place, like duplicate-code checks. So people had that safety net, knowing what they could do. Even when they were exploring in sandboxes — which, again, had the right security and all the guardrails in place — they had environments to explore. It was a playground with the right guardrails, which helped foster innovation. And the hackathons — we have done many with the different hyperscalers, even with Nick's team — generated so much interest. People have come back, and it actually pushed the innovation forward.
Given our company's scale and the kind of investments we have made — people were initially skeptical that CGI, being so strict about certain things, would be open to this technology — the way it was done, people saw the value. And now they've started developing so many new accelerators, small reusable assets, and some IP as well, which have come up in the last four or five months, especially registered IP in the AI space. These have become good examples for partners, who said, I can experiment; this company is really pushing me. And they're not worried about security, because it's part and parcel of their day-to-day work. It's nothing new for them. AI didn't bring something they didn't already know; they were attuned to the company's culture. I think that played a big role in how we didn't slow down but actually accelerated with this new technology, despite some of the challenges people put out in the news.
Nicholas Morel (29:54)
You bring up a really interesting point, Raghav. If I look at how I talked about the importance of having that vision and that prescribed area or framework in which we could work — and if we look at some of the projects we tackled with CGI as a customer, because it's always interesting: I look at CGI as a customer, but CGI is also one of the largest SIs globally, working with hundreds and thousands of customers daily. If we look at how you adopt these types of tools internally: when we looked at different areas of impact last year, we quickly referred to CGI's policy around how data sensitivity is classified. Is this data possible to use as part of this workflow? Can we use this data in the way we are planning to use it? Are there any regional requirements under which this data must be processed in a specific part of the globe? All of these policy decisions informed, to a certain extent, the scope and some of the limitations we had to keep in mind for our product. But that didn't make our product less good or less performant. It created a product with a much easier path to production, because we actually thought about everything coming from the CISO office, the CIO office, the CTO, and leadership, so that when we built this project out, we could ensure it would actually see value in production and not fall within that 95%, or 85%.
And I think that as CGI looks at externalizing these capabilities to the customers that we either jointly share or don't, this type of direction or guidance will be key in helping those customers not fall into that trap of a great project, great idea, great POC, not in production. So thinking about these things first: I think the way you do it as an organization will help inform a better choice of use cases, but ultimately ensure they go to production. And the way you take that to market will be very different from other organizations that come with a very hackathon- and POC-heavy approach, without thinking about what it means to take it to production.
Section 5: What's Next — Agentic Software Delivery and Leadership for the AI Era
Diane Gutiw (32:01)
Yeah, great points there, Nick. And this is a great segue into where we want to go next, which is to talk about what's coming. Your comments on information and data sensitivity reflect exactly how we worked with Google in understanding the model for how we pass information back and forth between agents based on information sensitivity. One of the things we've talked about a lot in working with organizations adopting these technologies is that the shift from a culture of risk aversion to a culture of innovation requires people to give themselves permission to use the tools, and organizations to grant those permissions. Again, something we talked about a lot at the Google Kickstart: it's not cheating to be efficient. It's cheating not to use your own critical thinking, your own creativity, your empathy, all those things we bring to software development and building out new solutions. So let's look a little at the future. What's on the horizon for both of you? What do you see coming next that's exciting for the technology as well as for the future of work: how we're working with these technologies, how we're going to be interacting, the shift in our workplace? We can look to the work that CGI and Google did together with some of our legacy workflows and interactions, building responsible AI into what you're seeing in the industry. So, Raghav, maybe first: what's exciting to you, and what's coming up that's cutting-edge that your team is getting excited about?
Raghav Kumar (33:38)
So the new thing everyone talks about is agentic — agents — and how it is going to impact the work we do, especially in software development, and most importantly, hybrid crews, where humans will be interacting with agents.
I think that's very interesting, and we are actually looking at how it's going to impact the current way we work, the teams we have, the team structures, and what they will look like.
And most importantly, I had a very interesting question from a journalist, who asked: when you say agents, they are machines — how will you appraise them? Who will be responsible for their mistakes? That actually pushes us to say that guardrails become more important. We have to define guardrails: in this whole evolution, what will the human-agent interaction be? How will we measure an agent's value? What would its life cycle be? When agents drift, what should happen to them, and how will we handle that? And whenever there are issues, who will be responsible: the agent, the person handling the agent, or whoever developed the agent? CGI, I think, has something like a human-agent interaction framework; it's more on the responsible AI side, and it might evolve.
But yeah, that's very interesting, and that's what we're going to see.
Diane Gutiw (34:57)
Fantastic insights. I'm really interested, Nick, from the Google lens, what's exciting you right now?
Nicholas Morel (35:05)
Well, I mean, from our perspective, what's interesting is that we're not really changing much of what we brought to the market last year, when we were already talking about agentic AI. But what we're seeing is a bit more of a focus on how this agentic workplace transformation is going to happen.
And we see this as really a fundamental shift in how work gets done. This, of course, is the vision — how it applies will differ from organization to organization in how they adopt it. But it's about moving beyond interacting with AI as a tool to a future where AI agents are actively and autonomously participating in our workforce, in our workplace. We really see this empowering a kind of human-agent workforce. That's going to reimagine a lot of our current processes, and ultimately it's going to reshape pretty much every task and every job within every industry over time. So when we think about these North Star objectives — how do we foster this innovation, this culture, as we move toward this agentic workplace transformation — then how are we, as a company, going to set ourselves up to go through that change over time?
But this only happens and is only made possible once companies like Google provide tools that offer governance capabilities for these agents. Because, to Raghav's point, these agents must be measured on their performance. They must be measured on what they're yielding. Are they yielding something within the bounds or the expectations we have for the roles these agents fill within our workflows? Without clear visibility into whether these agents are performing the parts they are expected to play, in a human-in-the-loop or non-human-in-the-loop process, you can't work toward this agentic transformation narrative. This is really a people, process, and technology conversation, where the technology has less to prove in a lot of cases, because these tools are very powerful; how you integrate them with people and process becomes the crux of what defines those that will succeed and those that won't.
Diane Gutiw (37:18)
Yeah, and you bring up a great point. You know, the technology is exciting and it has been moving so quickly over the last four years. A lot of organizations are just trying to keep pace. And I get that question a lot: how do you stay ahead of what's coming next?
But what I really feel, now that the tools are becoming more familiar in organizations — everything from generative AI, large language models, and small language models to agentic AI — is that we're seeing a shift toward user experience and change in the organization. We're the last generation of people that will be managing just people. What does that mean? And then a great conversation we had last night with a client: how should a person interact with these tools? Because there's a bit of fatigue with the chat interface. When do you push a button? How do you present information back so a person can validate it and actually get the value from the time saved? I know we looked at a lot of that in the agentic workflows we worked on together. So I think that process-and-people side is itself a field that's really going to evolve.
Nicholas Morel (38:22)
And if we look at the exciting things coming our way from Google's perspective — to Raghav's earlier point around the SDLC — look at our tool Google Antigravity, an agentic development platform, an agent-first IDE that was made available in the market last year for people to test out. The market uptake and the interest have been incredible. It brings all of this excitement around: how do I reimagine my ways of working if I'm now evolving my IDE to be agent-first, allowing some parts of my code or my process to potentially run autonomously? But I would end on this: not making a choice, not choosing a tool, is not really an option.
The AI-for-everyone story is truly where we're going, and organizations must challenge themselves to figure out how to provide the right tools for the right people within the organization. Because if you don't provide one, you're inviting shadow IT, inviting people to use their own consumer-based products to do their work more efficiently. And that has serious ramifications for security, data privacy, and data protection, because you end up conferring potentially very sensitive information to public models. So I think, as an organization, you need to think about how you're going to provide tools as part of this transformation, so that people feel the company is not only providing guidance but also giving them a safe space to interact with these tools and technologies.
Diane Gutiw (39:58)
This has been an absolutely fantastic conversation. I really appreciate both of you diving in here and talking about your real experiences with the industry picking up these tools for software development. I have one last question before we wrap up that I think a lot of our listeners will be interested in. If you were to give advice to a leader who is about to start this software development process, leveraging AI assistants or agentic AI tools, what's the one thing you would tell them immediately? So Nick, let's start with you. What advice would you give to an organization just beginning this AI-assisted SDLC journey?
Nicholas Morel (40:40)
Well, I would say the first one is to educate and familiarize yourself with the tools. We often take for granted that the executives or leaders making these decisions understand the technology they might be prescribing or building a vision around. But what we've actually found, through the numerous workshops we host here at Google or with partners, is that a lot of executives are not necessarily aware of the depth of the technology and what it can actually accomplish. If we look at this from a software development or engineering perspective, we might have some leaders who have removed themselves from the keyboard over the years and are less in tune with the day-to-day realities on the shop floor, as we could say. These tools are evolving so quickly, and this technology is shifting at light speed; it's okay to forgive yourself for not knowing everything. But lean in as a leader and actually get a sense of what these tools can do: play with Antigravity, play with these IDE tools to a certain extent, so that you can familiarize yourself with the power they come with. You can also recognize where the current gaps are. These tools don't do everything. They're not perfect. It's not a magic seasoning you can sprinkle on your SDLC to automatically make it function better. Understanding these things, I think, will make you much more authentic as a leader when prescribing where the company might evolve with AI across the SDLC part of your organization.
But also, think about how you as a technical leader can partner with the other, non-technical executives to understand the challenges they face. What are the real business outcomes currently stuck in legacy applications, legacy code, and legacy tech debt that you and your team, with these new tools, can now supercharge and unlock for them? Because ultimately, when the technical teams and the lines of business come together and agree on a business outcome that matters to the board, those are the projects that can mobilize organizations to evolve their ways of working around something deemed meaningful and worthwhile for the organization itself. So, two things: familiarize yourself with the tools and technology, their capabilities, and what they can and cannot do; and partner with the non-technical executives in your organization to figure out how you can unlock value that's trapped right now, trapped in the past, that you can help propel forward. So I cheated, Diane: you said one thing, I gave you two. But I think those two things really work hand in hand from my perspective.
Diane Gutiw (43:19)
Yeah, fantastic points. Raghav, does that resonate? What would your advice be to a leader?
Raghav Kumar (43:24)
Yeah, I think similar to what Nick says. One is lead by example. The other is to move from AI strategy to AI reality. Leaders have to adopt and adapt, because it's very important: when you talk about a new technology, your team is looking up to you, asking, first, did you use it yourself before you came to me and told me how it works?
That's one thing. The second thing is that the more you know about the technology, the better you'll be able to articulate to non-technical people how this tool can help them, how it can make their lives better — their day-to-day, the eight or nine hours they spend, in terms of how it can help them become much more productive and deliver more value to the clients in whatever conversations they have. You'll also know the limitations of the tools, so you're not promising the client the moon but telling them what the reality actually is. That's been our experience. Your teams really start listening to you, because that brings a lot of authenticity.
Diane Gutiw (44:25)
Absolutely great points. This was a fantastic conversation. I want to thank you both for joining me here and lifting the lid on what we've seen over these last couple of years with AI assistance tools for software development, and where we think it's going. What resonates and strikes me most coming out of this is that this isn't really a technology play anymore. We're talking about building success in software development through people, through process, and through having the right foundation, as well as the patience to do this right.
So thank you to both of you. I look forward to continuing this conversation in the future.
