In part two of their Energy Transition Talks conversation, Peter Warren and Frédéric Miskawi, Vice-President of Global AI Enablement at CGI, build on their earlier discussion about moving from AI pilots to measurable value, focusing on how energy and utilities organizations can balance power, cost and trust in a rapidly evolving AI ecosystem.

Small, smart and strategic: The next phase of AI innovation

AI innovation is entering a new phase, defined not by the scale of models, but by the strategic use of the right model for the job.

“Large models are powerful, but they consume enormous cost and energy,” says Fred. “We’re seeing a shift toward smaller, quantized models that can run on-device—even on older CPUs—without waiting for massive infrastructure investments.”

This shift toward a multi-model, multi-agent ecosystem allows organizations to tailor performance and cost to specific use cases, especially where real-time decision-making and efficiency matter most.
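
As a rough sketch of what that tailoring can look like in practice, the snippet below routes each request to the cheapest model that is capable enough, preferring a local model when latency matters. The model names, costs and stub functions are illustrative assumptions, not CGI's implementation or any vendor's API.

```python
# A minimal sketch of routing work across a multi-model ecosystem.
# Model names, costs and the stub run functions are hypothetical,
# purely to illustrate the "right model for the job" idea.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelProfile:
    name: str
    local: bool                  # runs on-device / on-premise vs. in the cloud
    cost_per_call: float         # assumed relative cost, for illustration
    capability: int              # 1 = simple classification ... 5 = open-ended reasoning
    run: Callable[[str], str]    # function that actually calls the model

def route(prompt: str, required_capability: int, latency_critical: bool,
          models: list[ModelProfile]) -> str:
    """Choose the cheapest model that is capable enough; prefer local
    models when the decision is latency-critical (e.g. at the edge)."""
    candidates = [m for m in models if m.capability >= required_capability]
    if latency_critical:
        local = [m for m in candidates if m.local]
        candidates = local or candidates   # fall back to cloud if no local fit
    best = min(candidates, key=lambda m: m.cost_per_call)
    return best.run(prompt)

# Example wiring with stub models:
small = ModelProfile("small-quantized", True, 0.01, 2, lambda p: "on-device answer")
large = ModelProfile("large-hosted",   False, 1.00, 5, lambda p: "cloud answer")
print(route("Is this sensor reading anomalous?", 2, True, [small, large]))
```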

Edge AI and robotics: Bringing intelligence to the field

From oil rigs to smart grids, the edge is where operational intelligence now lives.

“Six months ago, on-device models weren’t great,” Fred explains. “But with quantization and layering, you can now run powerful models on smaller devices or older machinery.”
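
For readers who want to see the mechanics, here is a minimal sketch of weight quantization using PyTorch's dynamic quantization on a toy network. The toy model stands in for the much larger language models Fred describes; it is not the specific on-device tooling discussed in the episode.

```python
# A minimal sketch of the quantization idea: shrinking a model's weights
# (float32 -> int8) so it can run acceptably on an older CPU.
# The toy network below is a placeholder, not a real language model.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(512, 512),
    nn.ReLU(),
    nn.Linear(512, 128),
)

# Dynamic quantization converts Linear weights to int8 up front and
# quantizes activations on the fly during inference.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
with torch.no_grad():
    out = quantized(x)          # runs on a plain CPU, no GPU required
print(out.shape)
```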

He also highlights the growing role of robotics in safety-critical environments. Whether performing inspections or handling hazardous tasks, robots are extending human capability in the field, creating new collaboration between people and machines that enhances both resilience and safety.

From digital twins to digital triplets: Creating real-time operational insight

Fred expands on CGI’s vision for the “enterprise neural mesh”: a connected ecosystem where decentralized intelligence across legacy systems, new devices and robots forms a near real-time operational view.

“It’s what we call a digital triplet,” he says. “You can not only see what’s happening now but also simulate what would happen if you made a certain decision.”
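
A toy example makes the "triplet" idea concrete: keep a live state view, copy it, apply a hypothetical decision to the copy and compare. The grid model and numbers below are invented purely for illustration.

```python
# A toy illustration of the "digital triplet" idea: a live state view plus
# a what-if simulation run on a copy before acting. Values are invented.
from copy import deepcopy

grid_state = {
    "substation_load_mw": {"north": 42.0, "south": 55.0},
    "battery_reserve_mwh": 12.0,
}

def simulate_decision(state: dict, shift_mw: float) -> dict:
    """What-if: shift load from the south substation to the north one."""
    future = deepcopy(state)                      # never mutate the live view
    future["substation_load_mw"]["south"] -= shift_mw
    future["substation_load_mw"]["north"] += shift_mw
    return future

projected = simulate_decision(grid_state, shift_mw=5.0)
print("now:      ", grid_state["substation_load_mw"])
print("projected:", projected["substation_load_mw"])
```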

Peter observes that this evolution helps leaders finally “see AI not as an abstract technology, but as a decision-making partner that brings clarity to complexity.”

The human side of AI transformation

Despite rapid advances, Fred stresses that human adoption remains the biggest determinant of success.

“The technology has moved beyond our human ability to absorb it,” he notes. “What we’re seeing now is a human transformation story…helping people catch up to features that have been available for months or years.”

For energy and utilities organizations, this means embedding change management, capability-building and continuous learning into every phase of AI enablement to ensure sustained business impact.

Building trust in AI: Practicing “healthy paranoia” against bias

As misinformation and bias challenge public and enterprise AI systems alike, Fred emphasizes the need for “healthy paranoia.”

“These solutions are amplifiers. They accelerate access to information, whether accurate or not,” he warns. “Healthy paranoia means filtering, validating and double-checking.”

CGI’s approach leverages multiple models in tandem, combining cloud-scale and on-premise intelligence to improve transparency, reliability and deterministic behavior in AI outputs.
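
One simple way to picture that layering, assuming two stubbed-out models rather than real services, is to ask more than one model the same question and escalate whenever they disagree:

```python
# A minimal sketch of "healthy paranoia" in code: ask more than one model,
# compare the answers, and escalate when they disagree. The two answer
# functions are stubs standing in for a hosted model and an on-premise one.
def cloud_model(question: str) -> str:
    return "yes"            # stub: would call a hosted large model

def local_model(question: str) -> str:
    return "no"             # stub: would call an on-premise / quantized model

def cross_checked_answer(question: str) -> str:
    answers = {cloud_model(question), local_model(question)}
    if len(answers) == 1:
        return answers.pop()                     # models agree, accept
    # Disagreement: don't silently pick one answer, route to review instead.
    return f"ESCALATE: models disagree on '{question}' ({sorted(answers)})"

print(cross_checked_answer("Is this maintenance report consistent with sensor data?"))
```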

Managing digital entropy through continuous data governance

Finally, Fred introduces the concept of “digital entropy”: the gradual degradation of data accuracy and usefulness over time.

“Your data reduces in accuracy and usefulness, sometimes even becoming counterproductive,” he explains. “That’s why organizations need continuous processes to clean, archive, and govern data.”
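
A small sketch of what such a continuous process might look like, with invented retention thresholds and record fields, is a scheduled job that scores records by age and routes stale ones to archive:

```python
# An illustrative sketch of one "digital entropy" control: periodically
# score records by age and move stale ones to archive instead of letting
# them keep feeding analytics. Thresholds and record fields are invented.
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=365)      # assumed policy, for illustration only

records = [
    {"id": 1, "updated": datetime(2025, 9, 1, tzinfo=timezone.utc)},
    {"id": 2, "updated": datetime(2022, 3, 15, tzinfo=timezone.utc)},
]

def triage(records: list[dict], now: datetime) -> tuple[list[dict], list[dict]]:
    """Split records into those still trusted and those to archive."""
    keep, archive = [], []
    for r in records:
        (keep if now - r["updated"] <= RETENTION else archive).append(r)
    return keep, archive

keep, archive = triage(records, now=datetime.now(timezone.utc))
print(f"keeping {len(keep)}, archiving {len(archive)}")
```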

By embedding automation and governance into data management, energy and utilities organizations can sustain AI value long after deployment.

Balancing AI performance, governance and trust

For Fred, the next chapter of AI adoption is not about choosing between big and small models—it’s about orchestrating them effectively. The organizations that will lead the industry forward are those that balance innovation with governance, automation with oversight and optimism with discernment.

“The technology is ready,” Fred concludes. “What matters now is how fast we can absorb it, apply it and manage it with the right balance of trust and skepticism.”

Looking ahead: From AI intelligence to autonomous energy systems

As AI ecosystems mature, the convergence of edge intelligence, robotics and predictive analytics will continue to redefine operations in energy and utilities. Future conversations will explore how these technologies evolve toward greater autonomy, enabling not just smart systems, but self-optimizing enterprises that can anticipate, adapt and thrive in a dynamic energy landscape.

Listen to other podcasts in this series to learn more about the energy transition

Read the transcript

Introduction: AI to ROI in energy and utilities

Peter Warren:

Hey everyone, welcome back to part two of our series talking about AI to ROI. This is part of our ongoing Energy Transition Talks here at CGI. My guest today, from part one and part two, is Fred. I'll let you reintroduce yourself, Fred.

Frederic Miskawi:

Hi, Peter. Fred Miskawi. I'm part of our global AI enablement team. I lead our AI innovation expert services globally, and that gives me the good fortune of being able to work across geographies, across the world, across teams, with clients and different industries as well. I've been involved with artificial intelligence in one way or another since the 1990s.

Large vs. small language models: Finding the right fit for the job

Peter Warren:

Everybody thinks AI may have started right now, but of course this industry has been using machine learning for a long time, so we understand the early days of it, and CGI has been part of that as well. In part one we covered off a few things: is my data good enough? We touched base on the use of the best algorithm for the job, the don't-necessarily-use-a-sledgehammer-to-put-in-a-screw type of idea, and KPIs. Today we're going to hit a couple of interesting concepts: the shift between everything needing to be a large language model, like one of the big hyperscaler systems, versus small language models. When do you go to a small language model? When do you do things that are more deterministic and more control-based? How do you manage that decision process and what fits what?

Frederic Miskawi:

Great question, Peter. And that goes back to what I was mentioning in part one, which is the best algorithm for the job. We're talking about a multi-model ecosystem, a multi-agent ecosystem, and what we've seen evolve very quickly, organically, is that the cost and energy usage associated with these large language models can be quite substantial. So how do you manage budgets? How do you manage the overall cost of the solution? How do you manage also the response time?

Balancing cost, performance and energy use in AI systems

Frederic Miskawi:

Performance can also influence what kind of algorithm you're leveraging. So we're seeing this shift towards smaller models, on-premise models working with on-cloud or hyperscaler models. We're seeing quantization happen with those models so that you can get them as small as needed to work on a device. Even six months ago, the on-device models were not necessarily great, but you're seeing that evolve very quickly. You see the capabilities evolve to the point where now you've got NVIDIA coming out with things like Jetson 4, which will be powering a new generation of walking models, these bipedal robots that are coming out in the next 12 to 24 months.

Edge AI in action: Real-time intelligence for energy operations

Peter Warren:

Yeah, we just saw, I guess it was in Beijing, they had the robot Olympics for the first time, and it was kind of a mix of things, but I suppose that's a very dramatic example of edge computing. I mean, our industry, everything from oil rigs to energy production right through to smart grids, uses a lot of edge computing type technology, not all of it being major computer systems, some of it even being a little legacy. How do you see those types of computer systems evolving as we move forward?

Frederic Miskawi:

You're going to see an evolution of that ecosystem. We're already seeing it in the algorithms. You're going to have a wide variety of systems that are powering our enterprise, powering our networks, powering our various assets across the company. We're seeing it even for us as consulting firms. We're starting to see that evolution occur as we're talking about the energy and utility industry. You're going to see this deployment of bipedal robots, increasingly powerful and capable. What we're seeing in the lab today are robots that walk and talk like you and I, very fluid, able to do martial arts or able to dance, able to move very fine objects and operate equipment in a way that is a lot more deterministic than it was in the past.

That level of ecosystem evolution is what we're seeing happen today. And what you're not seeing, what's in the labs today that we get glimpses of through the work that we do, is going to truly revolutionize the industry and the way that we operate. You're going to have very dangerous situations and jobs that are handled increasingly by teams of humans and robots, where if a robot gets crushed, you're going to have a lot less heartache than if a human co-worker gets hurt. So you're going to see, just by simple need, this evolution of new algorithms and new ways of running these algorithms in our analog world develop over the next two to three years.

Peter Warren:

The mining industry has been a big adopter of robotics and self-driving vehicles. This industry has been a big adopter of the quadrupeds, the four-legged robots, the robot dogs as they're sometimes called. We see those in harsh environments. But even just looking at edge computing, let's say a static device, something on the edge, making a decision about do I turn this electricity on, do I open that dam, that type of edge computing. And you made an interesting comment where you were saying that some of these new models will even work on your very old Mac. It's not that people need the latest and greatest NVIDIA technology in every case. How do you see people moving forward with some of these smaller systems, maybe on some more affordable platforms?

Robotics and quantized models: Expanding AI at the edge

Frederic Miskawi:

Yeah, and I was mentioning earlier about quantized models: the ability to take a small model to begin with and then streamline it, filter it, remove some of the underlying parameters, in order to get it as small as possible to run on a smaller, weaker device, so that we can run some of these more powerful models even on CPUs.

It might be a little slower, but it works. With that technology you're always going to be dealing with a statistical model, so you've got to be able to work with these smaller quantized models running on this older hardware, and you've got to work with them in layering to make sure that you get a little bit more deterministic behavior out of them. And that's where agents come in. If you have a very small model that can run on device, one that is really the brain of something a little more deterministic in the body of what we call an AI agent, then you have the ability to run decisions, binary decisions, more complicated categorization, on device. So you've got very targeted needs, and for those very targeted needs you can do that on older machinery, and that means you don't have to wait. You don't have to capitalize a massive digital transformation in order to get the benefit of the technology.

From digital twins to digital triplets: The enterprise neural mesh

Peter Warren:

And you've talked about layering. You've explained the very small, the edge, and that's probably going to continue to expand and improve, as you mentioned. You referred to a concept of enterprise neural meshes and the use of a digital twin, and actually stacking those and making a digital triplet, which is a concept that Diane Gushue and yourself have brought forward and talked about quite a bit. That really is the layering, and when I explained that to a couple of executives, they said, well, finally I see a value to me for AI, because it's actually helping me versus maybe the people in the field or the different layers. Do you want to explain that layering, right from the edge through to helping the executive decide what to do?

Frederic Miskawi:

Yeah, our vision and what we're working towards, at least internally, from a client zero story perspective, is the enterprise neural mesh. It's the near real-time view of the enterprise, and we've been seeing inklings of that over time. But the idea is to be able to answer any and all questions that executives might have, stakeholders, investors, analysts, on what's happening within a given enterprise. So for us, what that means is, for example, 21 or 22%, I think, of our revenue comes from IP.

It's to be able to see and have visibility of the value that's being delivered: how many people are working on it, what kind of value is being delivered, the quality that comes out, and to see it on a near real-time basis. These models and this decentralization of intelligence, on legacy hardware as well as new hardware, as well as upcoming bipedal robots that will become not breathing but moving data collection engines, all feed into this digital triplet that gives you a view and understanding of the enterprise so that you can make decisions. And, most importantly, you can start running simulations where, when you have that level and type of data collection and the layering that comes with it, you can start looking at:

Human transformation: Building capability for AI adoption

Frederic Miskawi:

Well, if I were to make this decision, what would occur with the enterprise? What would be the impact of that particular strategic decision? That's what we're seeing evolve. The technology enables you to do that; it's already there, and now it's becoming more of a human transformation story. The technology is moving, and has moved, beyond our ability to absorb it. That's what I see day in and day out. We're working from an organizational change management perspective with teams, individuals, clients and client teams, building that capability and understanding of the technology so that we can absorb features and functions that were released several months, maybe even years, ago. So that's what we're seeing right now. It's that human evolution of understanding of this technology, the absorption of the technology, to work towards an enterprise neural mesh, a near real-time view of the enterprise.

Healthy paranoia: Combating AI bias and misinformation

Peter Warren:

Yeah, it's interesting. You mentioned customer zero. So for those that don't know what we're referring to there, we're doing this to ourselves. We're actually modifying how we operate internally and how we run, but simultaneously, as you mentioned, we're working with clients that are first movers in those areas. To wrap this up, where do you think this goes, if you use your crystal ball? We see a bunch of both good things and bad things. Today in the news there was talk about disinformation from certain websites, specifically coming from Russia, trying to train large language models, more the public ones, on material that's probably propaganda, that's their point of view. How do you manage all of these things as you move forward? How do you, even in your personal life, manage AI, and how do you see companies managing this as they go forward for, again, data quality and getting the right outcome to do the right action?

Frederic Miskawi:

Yeah, two words: healthy paranoia. And, funnily enough, I had a similar conversation with my oldest son this morning on healthy paranoia. These solutions are amplifiers. They accelerate access to knowledge and information, whether that information is accurate or not, whether that information is intentionally inaccurate. With healthy paranoia, you're building a filter. You're building filtering layers between yourself and this technology and potential actors that want to effectively brainwash you, and you're understanding that this technology can be used and abused for amplification.

This technology is not necessarily accurate either, not always. It will get better, of course, but right now what we're seeing is that we are dealing with statistical engines that may or may not always tell you facts, and with healthy paranoia you can start asking questions, validating information, double-checking, having multiple sources of information. So we're even doing that in our solutioning. We're bringing in multiple models, multiple hyperscaler models as well as on-premise models, within the confines of the same solution, again to help with deterministic behavior and more accuracy, dealing with biases and ensuring transparency. So we're going to continue to see these news stories come out, and this technology will be used to manipulate, will be used to steer groups of humans toward a particular realization, and it's up to all of us individually to realize that this technology is powerful. This technology has to be questioned, and we have to build healthy paranoia as we move forward.

Peter Warren:

Yeah, it's a good idea, even when you're using just the data within. And we'll wrap up here in a second. Going back to the first question about data quality and governance: in organizations maintaining the data, you said in part one that we didn't have to start with pristine data, but it's continuing to evolve, so that requires a bit of change management in the organization. We see the companies that are being most successful with this type of technology being very agile in the way they work, restructuring things, managing the use of this. They're responding to the data, but how do they go forward on maintaining this forest of data that is continually growing and getting weeds? How do they deal with that on a day-to-day basis?

Digital entropy and data governance: Maintaining AI value over time

Frederic Miskawi:

Well, I think number one is to embrace the idea that there is such a thing as digital entropy.

With digital entropy, the idea is that over time your data will continue to reduce in the level of accuracy and usefulness it has, to the point where that data may actually be counterproductive to your business goals.

So when you do that, and you have that healthy paranoia with your systems and digital entropy, you're putting layering in place to make sure that this data is being catered to in a more automated fashion.

What we've been seeing over time, through a kind of historical anthropology of that data, is that it gets old, it gets stored, it gets layered, it may be abused, it may be reused, and all of that was following human processes. Now these systems need not just quality data, they need data, and the more data the better, and they can infer patterns based on the data that gets ingested. But you want to be able to cater to that data: to make sure that you have the ability to collect the data, that you have the ability to cater to the quality of the data in case there are known sources of data that are not conducive to your business goals, and to eliminate the data or archive it if needed. These processes, this layering that you put in place, are all there to manage digital entropy. So if you know and understand that there are natural organic slash digital processes in place, that's going to make you understand that you have to have healthy paranoia and that you have to put these solutions and layering in place to manage the ecosystem.

Closing thoughts: Building trusted, resilient AI for the energy transition

Peter Warren:

I think that's a great spot to stop. Fred, thank you very much for today's conversation, in both part one and part two. Thank you everyone for listening and we will see you again in our podcast series on the ever-evolving energy transition. Thanks very much. Bye-bye.

Frederic Miskawi:

Thank you everyone.