In part two of the Energy Transition Talks series on AI and the energy transition, Peter Warren and Dr. Diane Gutiw, CGI’s Global AI Research Lead, move from theory to execution. They focus on how industrial organizations can scale AI responsibly, protect operations and unlock value from the Internet of Things (IoT) data already flowing through their systems.

As AI moves closer to core operations, success depends on two outcomes: managing risk with discipline and investing where measurable value is clear.

Catch up on the first episode here.

Managing AI risk to protect operations and trust

AI already processes information and identifies patterns at speeds beyond human capability. In high-stakes industrial environments, that power must be governed carefully. Dr. Gutiw highlights four risk areas leaders must address to maintain control and protect long-term value.

1. Bad actor risk
Organizations must prevent AI from being exploited for cyberattacks or malicious automation. Strong cybersecurity, model controls and governance guardrails are essential.

2. Misuse risk
Without clear standards, employees may unintentionally expose sensitive data or intellectual property to public models. Structured IT governance and data policies reduce this exposure.

3. Misunderstanding risk
AI tools can appear authoritative. Leaders must build AI literacy across their teams to ensure employees validate outputs, apply judgment and maintain accountability.

4. Missed opportunity risk
Perhaps the most significant risk is underutilization. Failing to apply AI to real operational challenges—such as safety, workforce shortages or asset reliability—means leaving value unrealized.
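
Governance and data policies like those described above are often backed by simple automated checks. The sketch below is a minimal, hypothetical illustration of a pre-submission guardrail that screens a prompt for obviously sensitive patterns before it could reach a public model; the `screen_prompt` helper and the pattern list are assumptions for illustration, not a complete data-loss-prevention rule set.

```python
import re

# Illustrative patterns only; a real policy would use a vetted DLP rule set.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal_tag": re.compile(r"\b(CONFIDENTIAL|INTERNAL ONLY)\b", re.I),
}

def screen_prompt(text):
    """Return (allowed, findings); block the prompt if any pattern matches."""
    findings = [name for name, pat in SENSITIVE_PATTERNS.items()
                if pat.search(text)]
    return (len(findings) == 0, findings)

ok, hits = screen_prompt(
    "Summarize this CONFIDENTIAL report for jane.doe@example.com")
print(ok, hits)  # False ['email', 'internal_tag']
```

A check like this sits between the user and the model, so exposure is prevented by policy rather than left to individual judgment.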

Energy and utilities organizations that implement consistent AI governance today position themselves to innovate with confidence tomorrow.

Investing in AI that delivers measurable business outcomes

Peter and Dr. Gutiw reinforce a simple principle: AI investment in the energy and utilities industry must be value-driven. Rather than asking, “What can we do with AI?” leaders should ask, “What operational problem are we solving, and what measurable outcome will improve?”

Across industrial sectors, successful deployments share common characteristics:

  • Clear alignment to strategic priorities
  • Defined metrics for quality, speed or cost improvement
  • Centralized standards that ensure trusted data
  • Scalable architecture that avoids one-off experiments

When governance and data reliability are in place, organizations can move faster. They reduce duplication, accelerate adoption and build reusable AI capabilities that scale across business units. When energy and utilities organizations move beyond experimentation for its own sake, the result is sustained return on investment and operational resilience.

Turning IoT data overload into predictive insight

Energy and utilities organizations generate enormous volumes of IoT data from assets, sensors and field devices. For years, much of that data has been underutilized. Agentic AI changes this dynamic.

Instead of relying solely on static dashboards and threshold-based alerts, leaders can interact directly with their data. They can test scenarios, explore patterns and receive predictive insight before issues escalate.

For example, across the energy and utilities sector, AI can:

  • Anticipate power overloads during extreme weather
  • Identify early signs of asset failure
  • Model staffing needs during peak demand or holiday periods
  • Optimize resource allocation across distributed networks
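
The shift from threshold-based alerts to predictive insight can be sketched in a few lines. The example below is a simplified, hypothetical illustration: it fits a linear trend to recent sensor readings and flags an asset whose readings are projected to cross a limit, before a classic threshold alert would fire. The function names and the transformer-temperature scenario are illustrative assumptions, not a production monitoring design.

```python
def threshold_alert(readings, limit):
    """Classic reactive check: fires only once the limit is exceeded."""
    return readings[-1] > limit

def predictive_alert(readings, limit, horizon=6):
    """Fit a simple linear trend to recent readings and flag the asset
    if the trend is projected to cross the limit within `horizon` steps."""
    n = len(readings)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(readings) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, readings))
             / sum((x - x_mean) ** 2 for x in xs))
    projected = readings[-1] + slope * horizon
    return projected > limit

# A transformer temperature still below its 90-degree limit but trending up:
temps = [70, 73, 77, 80, 84, 88]
print(threshold_alert(temps, 90))   # False: no reactive alert yet
print(predictive_alert(temps, 90))  # True: projected to exceed the limit soon
```

In practice, the same idea extends to load forecasting, staffing models and fraud detection; the point is that the data warns before the threshold is crossed, not after.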

This shift from reactive reporting to predictive intelligence transforms data into an operational advantage. As Dr. Gutiw notes:

“The more information these technologies have available to them, the better insights they can gather. IoT data is where we can finally get real value out of all that information we've been collecting for years.”

Improving safety and performance in the field

In remote and high-risk environments, AI also strengthens health, safety and environmental performance.

By integrating AI with drones and field assets, organizations can conduct inspections in hazardous areas without exposing workers to unnecessary risk. Predictive monitoring helps identify potential safety issues before incidents occur. In rural or disconnected locations, intelligent monitoring systems can support faster response times and more informed decision-making.

The outcome is clear: safer operations, stronger compliance and more resilient field performance.

Building a foundation for controlled, scalable AI growth

Industrial AI does not scale safely across energy and utilities without discipline. By combining governance, literacy, data integrity and value-focused investment, organizations remain in control as AI capabilities evolve. They reduce risk while unlocking measurable operational improvement.

Organizations can unlock the greatest value by applying AI deliberately to improve reliability, strengthen safety and support sustainable performance.

Read the transcript

Chapter 1: Managing AI risk to protect operations and trust

Peter Warren:

Hello again, everyone, and welcome to part two of my interview with Dr. Diane Gutiw from CGI. This is again a continuing segment of our ongoing podcast about energy transition and the things that are impacting our industry. Diane, would you do me a favor of introducing yourself again?

Diane Gutiw:

Hi, nice to chat again with you, Peter. My name is Diane Gutiw. As Peter mentioned, I'm the global AI research lead at CGI as part of our CTO group. A lot of my focus has been on the responsible use of AI and on advising different government organizations, such as the EU AI Commission and the Welsh Government; I sit on the Strategic Advisory Council. Most recently, in Canada, I'm the co-chair of the Federal AI Strategic Council and have been a member of the task force for the last couple of months, which is working with the government on refining our AI strategy moving into the future. So great to have this conversation. It's very topical, and there's lots going on.

Peter Warren:

Yeah, it's happening in real time. And I always forget to introduce myself. I'm Peter Warren. I'm the global industry lead at CGI for energy and utilities. Diane, in the previous one we talked a lot about people getting their data right, changing things, the impact of the tools, how it's impacting folks. We didn't really talk about organizational change management. We really didn't talk about risk management and the impacts of risk, but also how AI is extending maybe right out into operations, into the IoT networks and so on. Where would you like to start on that subject list?

Diane Gutiw:

Yeah, well, I think it's important to talk about risk because a lot of jurisdictions and organizations are trying to balance their risks with innovation. You know, we know we're at a pivotal point in getting real value out of these technologies, and yet really understanding where the risks are and mitigating that risk is critical. And we're at a point in time, you and I, in a lot of conversations, have heard lots of fear on, what happens when we move to artificial general intelligence, and what do we do when this intelligence exceeds our intelligence? And so, I'd love to have a conversation on that. So why don't we start with risk?

Chapter 2: Managing AI risk to protect operations and trust

Peter Warren:

Okay. So, what do you mean by the last statement you made there about general intelligence?

Diane Gutiw:

Yeah, we're not at that point yet. But as we see the rapid speed at which these technologies are evolving, we definitely are at the point where they are able to do some things better than we can. They are able to process information quicker; they are able to find patterns in information quicker. All the things that we as humans could do if we had unlimited time, unlimited data and unlimited resources, these technologies can do better.

I liken it to a crane: a crane can lift a heavy truck better than a human can, and we designed it to serve us because mechanically it was built to do that better. AI can do some things better for us. But the fear is, what happens when it can use that knowledge on its own, without the control of the masters? We're not at that point yet, and I don't know when we'll be at the point that it is able to go off and do things.

But there are risks at this point that it's critical we mitigate, and I can divide those into four. First of all, there's the risk of the bad actors, those people using these technologies for bad purposes, no different than we had bad actors with SQL code and other things. However, given the fear that bad actors can use these tools to design things that could be used in warfare or in hacking, we need to invest, and we need to stay ahead in understanding how we protect ourselves and what types of things can be done. And the EU AI Commission, in the code of conduct, has put a lot of onus on the general-purpose AI developers, the hyperscalers, to build safeguards into the tools. But it definitely is a worry. We need regulation, we need investment in research, and we need to focus on it.

The second fear, which I think is more day-to-day, is the fear and the risk of misuse. This is when people don't understand how the tools work: they build something and put in personal information or IP that shouldn't be in there and expose it, or they develop something unreliable that gives you a bad answer to a question under the premise that it has been tested. And then there's the fear of using it at all, because there's not enough guidance on how to use it. With that, we really do need guidance and guardrails. You know: this is how we deal with sensitive information, this is the type of information it's safe to use the tool on, this is how to protect your models so that you're locking down your parameters and your environment.

The third is the fear of misunderstanding. This is one we're seeing where people are starting to use AI therapists and AI boyfriends, and relying on AI to write entire papers without checking the facts, because we misunderstand that it is just a tool reflecting back what you want to hear. In the last podcast, you talked about exactly that: it sometimes tries to be flattering, or it connects dots incorrectly. So, we do need to refine that information and have a conversation, as well as understand what these tools can and can't do. AI literacy is critical to that. We need to make sure that, as citizens and as public sector and other organizations using these tools with our information, we understand what it is and isn't, so we can trust it. And when we use it day-to-day, we can't over-rely on these tools. We can't take away our critical thinking.

The last, and this was a long answer to your question, and I think this is very relevant, probably, to where you want to go next, is the fear and the risk of a missed opportunity. Because the benefit of these tools, of AI for good, is critical. The number of things we're able to do to solve real-world problems: doing things more efficiently, addressing resource capacity concerns, providing more personalized and directed services. There is so much good that we can do across different industries, reducing the screening ages for cancers, finding new drug discoveries. So, if we can manage all of these risks now, as we move forward and these tools evolve, we will have a really good foundation in designing safe technology, so that as the technology evolves, we are still the ones in control.

Chapter 3: Investing in AI that delivers measurable business outcomes

Peter Warren:

Yeah, I think that's an interesting point there. And we're talking about risk. And I know that when we've been out talking to clients, they talk a lot about the human in the middle. They talk about people managing systems, having those decisions brought up. What are the five things I should be worried about right now? What should I be looking at? We see organizational change going on to adapt to these things.

The companies that are doing best, oddly enough, are the ones actually putting in guardrails and standards: the IT department saying here's what you can use and what you can't use, here are your rules and regulations, to stop people from throwing corporate data onto a system, or maybe onto a cloud they're not supposed to, and a variety of other functions. It seems counterintuitive, but the ones that take a breath and put in the guardrails, rules and regulations are actually moving faster and having more long-term success than the ones that just jump into it, because they have reliable data and they're getting answers they can trust.

So, when you get into the next part you were talking about there, about how they actually want to move forward with innovation and have that next level of trust, what are your thoughts on that? Because some people think they should just be jumping in right away and doing everything: oh my god, my competitors must be doing creative things. What's the reality that you see?

Diane Gutiw:

You know, you need to drive these tools with value, right? You need to understand: what is the value I intend to get out of this? What is my return on investment? That's where you're going to make a difference. We talked about that: let's stop talking about AI and start talking about the problem we're solving. Value-driven investment is what is really going to move the mark. And that includes: am I getting the intended value? Did I do this faster? Is the quality improved? Did I provide a better service, did I provide better, quicker information to my staff? That's really critical.

But also looking at the value of how I scale. You know, I don't want just a whole bunch of little one-offs; I want an ecosystem that's bringing end-to-end value, and I want it to be consistent and aligned. That's where AI governance really comes in: how do I align these things? How do I reuse an agent? Rather than building 10 agents that are generally doing the same thing, develop a way to be efficient in the agents you're building, so that the outputs are consistent, the processes are consistent, and all the guardrails that go with them are consistent: how we manage our data, how we provide information back, what the user interface and user experience are like. If we can be consistent and scale that, that's where you start to get the real, true value.

Chapter 4: Turning IoT data overload into predictive insight

Peter Warren:

So, when we look at operations and at companies trying to do things, and of course you talked about risk in our previous call as well: organizations and the public sector are risk-averse, and certainly the energy and utilities industry is risk-averse. We've done things a certain way for a couple hundred years because that way it doesn't catch fire and it doesn't blow up. So, there's this real dichotomy of wanting to innovate, wanting to do things, but also realizing they have to stay safe. How do you see reaching out into the IoT networks, reaching out into operations? How do you see AI actually helping in the field, more than in the top office?

Diane Gutiw:

Yeah, the IoT data from all of these devices, particularly in utilities and energy, where we have IoT devices in all of our assets, on our power poles, collecting data by the second. You know, up until now we've been suffering from a huge amount of data overload, and we haven't really been clear on how to use that data to answer questions, or how to process it.

So, we now have in our toolbox a fantastic new tool that can look through those huge volumes of information and gather insights. You can then, as we were talking about earlier, have a conversation with that IoT data. Based on this data, why am I having significant failures in this type of asset in these conditions? Is there something I'm missing? And how can I get ahead of that? At what point can I be alerted that something needs to happen? The same would apply to fraud detection and all the other areas where we have a massive amount of data coming in that, in the past, we were really just watching for alerts, or for something exceeding a threshold. But how can we use that data to let us know in advance that something could use attention, so we avoid downtime, or we avoid hits, or we can refine our fraud detection in the future because we have better insights?

Just like humans, the more information these technologies have available to them, the better insights they can gather. And it's the same with AI; it's simulating human reasoning. The more information it has, the more rounded an output it's going to have. And IoT data, to me, is one area where we can finally get some real value out of all that information that's been collected.

Chapter 5: Building a foundation for controlled, scalable AI growth

Peter Warren:

In one of our meetings, that's when you made that statement about having a conversation with your data. One of the gentlemen there said, “now you're talking.” He said, “now you're making sense to me,” because they see a lot of people throwing things into the market saying, I made an AI something-this or something-that. And he said that, in his personal view, he could mimic all of that; in a few hours, he could duplicate that function. It's really getting into that conversation with data and having a trusted interaction with it.

Now, in energy and utilities, we've had a few people look at that. You've also looked at the cause and effect when something's been going wrong, and moving forward. Certainly, where we've applied machine vision, machine learning and so on in systems, we've even noticed that a software or firmware update on a piece of hardware totally changed its performance.

Diane Gutiw:

Yeah, absolutely.

Peter Warren:

And looking at this whole ecosystem, you would normally expect that. I mean, when we do updates to our PCs, we always see, oh my, it's not running as fast as it did before. But this was really catastrophic; something was wrong. So, what would be an example, in your mind, of a company in our type of industry having a conversation with its data?

Diane Gutiw:

Well, I think it's like having an expert that has access to all of your corporate information, images, documents and guidelines at any time. And the ability to use that agentic model where you're having a conversation with an orchestrator that can send off its team of agents: let me see how my IoT data is related to this alert that came up last week, and whether I can get to the bottom of it. Or the ‘what if’ analysis, which to me is absolutely brilliant. You know, if we know you're going to be short of staff on this day, how do I allocate my resources so that I can predict and prescribe where to put people? If there is downtime over Christmas, how can we deploy the right people to the right place, in a safe way, that gets the best outcome? Those are the sorts of conversations you would have with the data. If you had a whole group of experts sitting around the table that you could just throw these questions at, that really is what brings the value.

And so, we've moved the dial a little bit on this as well, with next-generation dashboards. We have been collecting this data, creating this semantic layer of data that feeds these dashboards. Well, guess what? We can now have a conversation with that and develop the future of dashboards. We had a client, an assistant deputy minister, who used this example: I have to go talk to the deputy minister. I get six people going through my dashboards, getting me discrete answers to questions, guessing at what he wants. And when I get there, I've often missed the mark, and they're now running around getting new information in real time. He said, I want to phone my data. I want to phone and say: what's the attachment rate to primary care physicians in rural areas? What percentage have chronic diseases? Who's going to acute care versus a walk-in clinic versus a nurses' clinic, and what's the cost of care? And what would happen if I added five more nurses' clinics in that area? What would that do? Right. And that's the sort of conversation.

So, looking at your sector, it would be very much the same. You know, you could phone it up and say, okay, I predict that I'm going to have an overload of power needed in this area, and there's a storm coming in. What should I think about? What could I do? How could I deploy differently? And how do I get ahead of this potential impact to the best benefit?

Also looking at health and safety, that's been a huge thing. How do I deploy people, particularly in Canada, where there are very rural, very remote, often disconnected areas? How can I do better there? What opportunities do I have to keep people safe in challenging situations? You know, we've already seen more use of drones. There's a great example: have a conversation with your drone. So, what did you see? What does that mean, right?

Peter Warren:

Well, I'm going to leave it with that. I think that's a great idea for our audience to think about: if they had access to all their data and a virtual expert, or virtual experts, that they could have a conversation with, what kind of questions would they ask? And how would it respond?

 With that, Diane, I'd like to thank you very much for the second installment in this series. We'll pick you guys up in the next podcast. Thank you very much for joining. I'm Peter Warren, and you are…

Diane Gutiw:

Diane Gutiw. It was great chatting with you, Peter, as usual. And I'm sure in three months we'll talk, and the whole ecosystem will have changed again.

Peter Warren:

It is constantly evolving. Thanks, everyone. Bye-bye.