In this episode of our Energy Transition Talks podcast series, host Peter Warren and Dr. Diane Gutiw, CGI’s Global AI Research Lead, discuss the shift from AI experimentation to industrial-scale application. While the industry has long used machine learning and robotic process automation (RPA), a new era of agentic and generative AI is redefining how energy, oil, gas, and utility organizations manage their data and their workforce.

From risk aversion to production at scale

While many organizations are investigating AI, moving solutions into full-scale production remains a significant hurdle. Dr. Gutiw notes that the current challenge is not just technological, but cultural. Organizations, particularly in the public sector, must balance the drive for innovation with a responsibility for transparency and reliability. The goal is to achieve "everyday AI," where these tools are seamlessly integrated into operations to advance strategic objectives.

"We need to stop focusing on the technology for the sake of the technology and ask: what are the problems we actually need to solve?" explains Dr. Gutiw.

The rise of agentic AI

The industry is shifting from simple chat interfaces toward agentic AI—multi-model tools designed to execute manual tasks within complex workflows.

  • Task-oriented intelligence: Unlike traditional search-based AI, agentic tools can complete manual processes like entering, moving, and finding information.
  • Expanded data sources: The definition of data has evolved to include previously untapped sources such as narrative text, images, and video.
  • Predictive maintenance: CGI is using machine vision to auto-generate predictive maintenance data, reducing the need for manual inspections.

Managing the hybrid workforce

Dr. Gutiw predicts that we are entering an era where management involves overseeing a hybrid environment of humans and AI agents.

"We're the last generation that's going to be managing purely humans in our teams."

This shift requires comprehensive change management to redefine roles. As AI takes over manual tasks, humans are transitioning from "doers" to "operators" and "overseers" of AI solutions. This change introduces a critical need for responsible AI, including "agentic fact checking" to validate AI outputs and ensure data accuracy.
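The "agentic fact checking" idea can be sketched in miniature: before accepting a generated claim, test it against the cited source. The snippet below is an illustrative Python sketch with deliberately simplified matching logic (term overlap), not a description of CGI's tooling; a production checker would pair retrieval of the cited page with an LLM judge.

```python
import string

def verify_citation(claim: str, source_text: str) -> bool:
    """Return True only if every key term in the claim appears in the source.

    Deliberately simple: the point is the workflow (never accept a
    generated reference without testing it), not the matching logic.
    """
    def terms(text: str) -> set[str]:
        # Strip punctuation and lowercase so the comparison is forgiving.
        cleaned = text.translate(str.maketrans("", "", string.punctuation))
        return set(cleaned.lower().split())

    return terms(claim) <= terms(source_text)

source = "CGI provides software solutions that support nuclear operators."
print(verify_citation("CGI provides software solutions", source))  # True
print(verify_citation("CGI designs nuclear reactors", source))     # False
```

The same pattern scales up: an agent asks the generating model for a URL and page reference, retrieves the source, and only passes the output downstream once the claim is grounded.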

Digital triplets: The next evolution

One of the most transformative concepts discussed is the digital triplet—a framework that places a layer of agentic AI over existing data assets.

This allows organizations to communicate with their infrastructure or systems and ask "what-if" questions about grid changes or power plant distributions. The digital triplet enables iterative, granular decision-making, providing experts with deep, actionable insights into complex systems.
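As a toy illustration of that "what-if" pattern, assume the grid is represented as plain asset data; the agentic layer's job is to answer counterfactual questions against it without touching the live system. All names and numbers below are hypothetical, and a real digital triplet would sit over production telemetry rather than a list of records.

```python
from dataclasses import dataclass

@dataclass
class Plant:
    name: str
    capacity_mw: float
    online: bool = True

class GridTriplet:
    """Toy query layer over existing grid asset data (hypothetical names)."""

    def __init__(self, plants: list[Plant], demand_mw: float):
        self.plants = plants
        self.demand_mw = demand_mw

    def headroom(self) -> float:
        # Spare capacity = total online capacity minus current demand.
        online = sum(p.capacity_mw for p in self.plants if p.online)
        return online - self.demand_mw

    def what_if_outage(self, name: str) -> float:
        # Answer "what happens to headroom if this plant trips offline?"
        # without mutating the underlying asset data.
        online = sum(p.capacity_mw for p in self.plants
                     if p.online and p.name != name)
        return online - self.demand_mw

grid = GridTriplet([Plant("Hydro A", 400), Plant("Gas B", 250)], demand_mw=500.0)
print(grid.headroom())               # 150.0
print(grid.what_if_outage("Gas B"))  # -100.0, i.e. demand would go unmet
```

The agentic layer's contribution is translating a natural-language question ("what if Gas B trips?") into queries like `what_if_outage` and explaining the answer back to the expert.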

As Peter observes, “the real value is realized when organizations commit to revisiting and improving their AI initiatives rather than treating them as a single deployment.”

Join Peter and Dr. Gutiw for part two of the series, where they explore how these AI advancements integrate directly into IoT operations, and what that means for scaling intelligent, connected systems.


Read the transcript

Chapter 1: From risk aversion to production at scale

Peter Warren:

Hello everyone, and welcome back to another installment in our continued series exploring how things are changing in the energy transition. This is part of a podcast series, and today I'm happy to be talking to Dr. Diane Gutiw, who leads our AI practice globally. Diane, do you want to introduce yourself further?

Diane Gutiw:

Thanks, Peter. As you mentioned, I'm CGI's global AI research lead, part of the CTO team, and my focus has largely been on both responsible use of AI, setting out our organizational policies, and a lot of advisory work with different governments. We've had the privilege of advising the EU AI Commission on the code of conduct for general-purpose AI. I'm an advisor on the Welsh Government AI Strategic Council, and most recently I have been involved in Canada: I'm the co-chair of the Federal Government Strategic Council there, and I was very lucky to be selected for the smaller task force helping refresh the AI strategy for Canada, which is underway right now.

So, there is lots going on as organizations try to understand how to leverage these technologies to the best benefit and how to put guardrails around their use. And then the other side of what we do, which I think we'll be talking about as well, is applied research: where do we get the best benefit from the use of the tools? We're working on some really cool initiatives with clients across all sectors, with some really interesting things happening specific to energy, oil, gas and utilities.

Peter Warren:

That's very cool. And you and I have had the privilege of being in a few meetings together lately, and there were a lot of consistent themes: customers looking to leverage this thing called AI, trying to understand what it is, how to apply it, how to make it work correctly. One of our customers coined the phrase "everyday AI," and we've kind of liked that. So, everyday AI in operations is something we talk about as well. And it's very interesting to see where things are going.

What would you say has been sort of the top theme that you keep hearing over and over again when you talk to people?

Diane Gutiw:

Yeah, you know, we're still in that place of, as you mentioned, trying to understand what it is, and what it is in the context of organizations, and trying to move past risk aversion to take advantage of it. I would say most organizations feel they are behind in adopting AI solutions. There might be lots of investigations and pockets of implementation, but moving that to scale, moving solutions into production, has been slower than most organizations would like, for lots of different reasons that we can talk about here.

The latest themes are more on agentic AI: what that even means and where organizations can get value from the technologies. And there are lots of really interesting things happening, both internally at CGI as well as in the market.

Chapter 2: The rise of agentic AI

Peter Warren:

Yeah, it's interesting how these terms like agentic keep popping up. I mean, AI has been around for a long time. You point that out regularly, and certainly in this industry, things like machine learning and robotic process automation (RPA) have been there. But I think when people say AI today, they really mean agentic or generative AI, the things that are thinking and adding to it. How do you see that really changing today's operations?

If people have good data, and we've talked about that on these calls before, and they trust their data, what is agentic actually bringing to the story? What is that solving?

Diane Gutiw:

To take a step back, we're even rethinking what we call data. Before, data was discrete and operational: data that we have collected over decades from our operational and enterprise technology solutions. Whereas now, data can be documents, it can be videos, it can be images. So, the scope of what we consider data has changed. And that opens up a whole realm of how you manage that, what it means for data governance, and what it means for downstream systems. And agentic and generative AI would be those downstream systems that let you get value out of that data investment.

With agentic AI, it really has moved the marker on value. Until now we've looked at generative AI tools almost like a search engine: a chat interface for looking something up, getting access to information, having a conversation with your data, which is brilliant. There's no question that's brilliant.

Where agentic is moving the dial is being able to implement these multi-model tools for more complex, more reliable tasks, integrated into workflows, to take over some of the highly manual things like looking stuff up, entering information, moving information, finding information, generating stuff from other stuff. It really is making it easier to get value because you're not just looking at one-off interactions back and forth; you can actually task it in a much more intelligent way to complete things that you would normally do manually.

And that's where we're really seeing the value increase. The focus now is on where you get the value. Where do you insert these tools? You really have to look at the cost-benefit, because there is a consumption cost and there is a license cost. So, where do you insert these so you're going to get the most value, taking those costs into consideration alongside the human involvement, as well as moving humans from being the doers of these manual tasks to being the operators of the solutions doing those tasks?

And then management becomes managing these hybrid environments, looking at human-AI interactions as more than just finding information or generating something: actually completing tasks whose oversight used to be 100% human. So, as I've said a few times recently: we're the last generation that's going to be managing purely humans in our teams. Now we need to look at this hybrid environment where we're managing human-AI interactions even more than we were with things like RPA.

Chapter 3: Managing the hybrid workforce

Peter Warren:

That gets into a whole interesting subject of, you know, guardrails, safety, culture, people, humans managing all that. And I remember, when you talked about that, one of our clients' eyebrows going up at the fact that you're not going to be managing just humans in the future. You're going to have these other systems, these beings.

And I think that was one of the things that worked well in one of our projects when we taught the employees to think of this as a system that will do the things you don't want to do, the grunt work that you don't want to do or don't like to do. So that's a kind of cultural thing.

Where do you see that intersection between culture and technology? How are people adapting to that?

Diane Gutiw:

Well, it's still really early days. Let's not fool ourselves that this has become well embedded across all industries. However, organizations that are getting very familiar with the general-purpose tools and the software development tools are moving quickly into these agentic solutions. Organizations are also taking a step back to see where they have bottlenecks, where they have large amounts of very manual processes that could be made more efficient.

So, there are a few culture shifts. One is the shift away from risk aversion, particularly in the public sector, and with good reason: the public sector has a responsibility to citizens to protect information, to ensure reliability, to ensure transparency. And with some of these black-box solutions, where you can't necessarily see how they were trained and where the information is coming from, having to design a solution that locks down the sources of information and provides that transparency takes some time. That has maybe made the uptake a little slower, as has the need for direction and guidance: what should we use, what should we not use? That is something that needs to be nailed down in both the public and private sector.

So, when we look at the culture, it's moving from a culture of risk aversion to one of innovation. And that's a big shift. Change management is absolutely critical. The next shift is changing what we do: looking at what the roles were before and how those roles are shifting. If I look at somebody who manually entered lots of financial data into spreadsheets, and they now have a tool that's supporting them, how do they become an overseer? What's the change in their role, and what capabilities and skills do they need to do that?

And then there's peer review, which we're hearing about in the news more and more: the need to not just take the outputs of these tools at face value but to check the references, because the tools can be notorious for helping you out by making up facts and references that may not be there. So, how can we develop an agentic tool to fact-check, an agentic tool that helps us get to that validation quicker? Use the tool for this as well; don't do all the checking yourself. Ask for the URL, then ask on what page of this document or this website you can find this information, and let it help do the fact-checking.

So, the next area, and that's a long answer to your question, is that once we redefine what the roles are, we're also redefining what teams look like, what management looks like, what value we intended, and how we measure it. Both change management and responsible use of AI are the foundation that enables that, so that people are comfortable innovating.

Peter Warren:

Yeah, and I had personal experience with one of the AI tools we use here. It's a publicly available tool, and I asked it to talk about what we do in nuclear, and I think it was trying to be flattering when it said we actually design and build nuclear reactors, which is totally false, completely wrong. We do software solutions that support some things, but we don't actually build nuclear reactors. So, I had to go back and correct it, and it thought for a while and came back and said, okay, thanks. I don't know whether it's going to be better in the future; I kind of re-described who we are as a company. And I think that's something we see a lot of.

When we're out meeting with clients, a lot of them are still taking early steps. But I saw a lot of them doing the right things: getting foundations together, getting structure together, starting to organize their data. And you said data's changing. You also mentioned you had a paper called "Stop Talking about AI." Do you want to talk about that for a minute? Everybody says the word AI, but is that really the answer to everything?

Diane Gutiw:

Yeah, you know, it's a great point. And it hasn't gotten old to say we need to stop focusing on the technology for the sake of the technology, take a different lens on it, and ask: what are the problems we actually need to solve? What are the strategic problems and strategic objectives that this fantastic new set of tools in our toolbox can enable? But you're exactly right: AI is not the answer to every problem. When we take that different lens on the things we want to solve, often it's a data integration problem or a data quality problem; they aren't necessarily all AI problems.

However, I won't hesitate to say we have a fantastic new tool in our toolbox that can parse narrative text, which used to be really complex to model and very expensive and time-consuming to get real value out of. We're also able to assess and create synthetic data, not just for relational-type data, but also for images.

So, a great example would be the work we're doing with machine vision, where I can take the front side of an asset and auto-generate what the backside should look like. So, when I'm doing predictive maintenance, I can say this part of this asset is worn down by X percent without having to have a camera go all the way around it. There are all kinds of really great things we're able to do with these tools when we focus on what it is you're trying to solve, rather than: wow, what are we going to do with this agentic tool? Where is there a workflow we can focus on?

If we take a step back and say, where is there a problem? Where are we missing our SLAs? Where do we not have the capacity to do this, or where does our quality need some support? Those are the use cases I would say you need to jump on and see, is there something we can do with these tools?

Chapter 4: Digital triplets: The next evolution

Peter Warren:

Yeah, and I like the video imaging piece you were talking about. We've certainly got a couple of people looking at that: the as-designed, then the as-built, maybe using something like a LiDAR scan, and then cameras to capture the as-is and what's happening in real time. And that's become a real thing people are looking at.

But also in processes, I think people tend to think AI is a "one and done," but what I'm seeing as we deliver managed services and other work is an iterative approach: go back and look at not just the one piece but the whole process again, end to end. How do you see that working out, where people are actually going back and taking another shot at it? You can get some goodness in the first pass, but how is this iterative approach working?

Diane Gutiw:

Yeah, iterative: having a conversation with your tools and with your data. Actually, you've just twigged me on something I wanted to mention to you, even outside of this podcast: our digital triplet framework. For folks who may not be familiar, it's the ability to put a layer of agentic AI over top of our existing data assets. It could be anything from entire infrastructure, where you're looking at all your systems and how they work together and having a conversation with that data: if I was to change this in my grid, how would that affect other things? If I put a new power plant over here, how would I distribute? That's one view, but it goes all the way down to healthcare, where you can look at an entire human body system, see complex information, and connect the dots between clinical notes, current lab tests, and so on. So that's our digital triplet.

I was listening to a really interesting discussion with Geoffrey Hinton, one of the Canadian thought leaders in AI, about how we're almost at the point where we can have a conversation at the protein level. We can have a conversation at the genomics level. We're there. That's a digital triplet. So, if the data exists, we now have tools that we can fine-tune to understand the context of genomics and proteins, and then have a conversation with that data. What would happen if I was to do this? How is this different from what I would expect? What does this mean? And what's my next best action if I want this result?

So, to your point on iterating: having a conversation with an expert that has deep knowledge of whatever ecosystem you're talking about, that digital triplet concept, is really moving the dial on being able to get better information about our assets, even down to a fine level of granularity. What does the backside of this look like? What would I expect to happen if this thing were to break down? What's my next best action?

Peter Warren:

Well, that's a great place to end this particular episode. We'll pick up in part two and continue our talk with Dr. Diane Gutiw, diving a bit more into the connections to IoT operations and other elements. So, with that, thank you very much, and we'll pick you up in the next one. Bye bye.