Jonathan McPhail
Vice-President, Corporate Services - Architecture and Design Advisory

Dennis Russell
Director, Corporate Services - Architecture and Design Advisory

Vijay Chevendra
Director, Corporate Services - Architecture & Design Advisory

Chrissie Shum
Director, Consulting Intellectual Property - Canada

Most organizations are not short on ideas for applying AI. Instead, the challenge is moving AI from proof of concept (POC) to production. Promising POCs often stall as teams wait for missing specifications: the human-sanctioned intent that guides implementation by humans or AI agents. Often, the root cause is deceptively simple: teams lead with technology rather than the business problem and end up building the wrong thing. An equally common shortfall is neglecting the human context within an integrated AI experience. The result is what we call AI experience debt: design gaps and usability oversights that slow human-centered AI adoption and lead teams astray.

The challenge in enterprise software delivery is twofold: build the right thing and build it right. Achieving both requires integrating AI responsibly across the delivery lifecycle, from research and requirements through to design, architecture, build and deployment, while continuously learning, aligning and adapting. Each stage is being reshaped by AI, and the boundaries between them are becoming more fluid. 

In this blog, we focus on the early stages of the cycle: 

  • Understanding the problem and data landscape
  • Shaping ideas into clear specifications
  • Using rapid prototyping to build alignment, user validation and stakeholder confidence

These stages set the foundation for everything that follows. Rush past them, and even the best engineering can’t compensate.

Activating the flow through a human-centered innovation engine  

Conversations about AI in software delivery often emphasize keeping the “human in the loop,” but pay less attention to keeping the customer, end-user and UX designer involved alongside the developer. Most organizations lead with technology and retrofit the human experience later. We chose a different path for our software delivery cycle: a cohort-based, human-centered program balancing business value, human experience and technical solutions. Here’s what we learned.

We didn't start with a grand framework in mind. We started with the strategic challenge: move quickly and at scale beyond workshops and accelerate the translation of AI use cases into working POCs. What emerged was a design-led, iterative cycle bringing together business owners, UX designers and a lean AI squad across three core stages.

Stage 1: Challenge narration and journey discovery

We assembled teams focused on applying AI to similar business problems and started with the user journey. Instead of filling out complex forms, business owners simply narrated their challenge. Their stories became the foundation for defining journey maps, clarifying personas, identifying pain points and uncovering opportunities for AI to deliver value.

Stage 2: UI prototyping and vision alignment

Our UX design team rapidly translated each journey into user interface (UI) mockups and clickable prototypes. These visuals achieved two things at once: they gave users something tangible to validate against their real needs, and they gave leaders and developers a shared vision. That shared vision built stakeholder confidence so that the implementation could become usable, trusted and adopted. 

Stage 3: Rapid AI proof of concept development

Our lean AI squad brought each POC to life within one month, but their work began well before any model selection or code writing. During UI prototyping, the critical investment was having AI developers available so that the business problem and intended value were well understood. With a sharpened understanding of the problem statement, they had a clear objective against which they could verify results. The squad not only tested the hypothesis effectively, but also evaluated and offered multiple implementation options, with accompanying trade-offs, for stakeholders to consider. 

This discipline—distinguishing a problem worth solving from a problem worth solving with AI—shaped every engagement. From there, the squad grounded each solution in reality by understanding the data landscape, evaluating the nature of the challenge, and choosing the right AI technique rather than defaulting to the most familiar tool. By staggering cohorts, we delivered two to three new POCs each month. 

Human-centered design yields unexpected results

We were often surprised by the results. In one case, legacy vision models transformed UI concepts into solutions that surfaced reference images directly within chat responses. These were not planned innovations. They emerged from staying close to the problem and being willing to challenge the assumed path to a solution.

What made this work was engaging UX designers from the start. By turning ideas into visual representations of the intended human experience, we drove deeper discovery for both the problem and the solution. The UX designers helped us avoid AI experience debt. 

Additionally, the human perspective was not an afterthought. It was the starting point. Over time, shorter design cycles, early user validation and growing stakeholder confidence became the engine that kept the program moving.

From proof of concept to production-ready: Where is the specification?

While we proved we can innovate at scale, we soon hit our next obstacle: progressing POCs to production. Sure enough, the first question from the implementing team was, "Where is the specification?" In other words, where were the implementation instructions? Our rapid POC documentation, while useful, wasn't something another team or coding agents could readily pick up and implement.

Rather than going back and analyzing user flows or business rules to author specifications for dozens of projects, we generated them with AI. Our hypothesis was: “Given a curated knowledge base of design patterns, an AI agent can generate consistent user stories and technical specifications directly from conversation transcripts.” 

Using context engineering and foundation models, we built a knowledge base around AI implementation design patterns, organized by archetype, from conversational agents to anomaly detection and intelligent workflow automation. In alignment with CGI’s “baked in, not bolted on” approach, we addressed security, data privacy and governance from the very beginning rather than treating them as an afterthought. 
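To make the idea of a pattern knowledge base concrete, here is a minimal sketch of how such a structure and its prompt assembly might look. All class names, fields and the prompt shape are our illustrative assumptions, not the actual implementation described above:

```python
from dataclasses import dataclass, field

@dataclass
class DesignPattern:
    """One curated AI implementation design pattern (fields are illustrative)."""
    archetype: str                    # e.g. "conversational-agent", "anomaly-detection"
    name: str
    guardrails: list[str] = field(default_factory=list)  # security, privacy, governance notes

# A tiny knowledge base keyed by archetype, mirroring the organization described above
KNOWLEDGE_BASE: dict[str, list[DesignPattern]] = {}

def register(pattern: DesignPattern) -> None:
    """Add a curated pattern to the knowledge base under its archetype."""
    KNOWLEDGE_BASE.setdefault(pattern.archetype, []).append(pattern)

def build_spec_prompt(archetype: str, transcript: str) -> str:
    """Assemble a spec-generation prompt from curated patterns plus a
    conversation transcript (context engineering, not model training)."""
    patterns = KNOWLEDGE_BASE.get(archetype, [])
    context = "\n".join(
        f"- {p.name}: guardrails={'; '.join(p.guardrails)}" for p in patterns
    )
    return (
        f"Using these {archetype} design patterns:\n{context}\n\n"
        f"Transcript:\n{transcript}\n\n"
        "Generate user stories and a technical specification."
    )
```

Note how guardrails live inside each pattern rather than being appended at the end, which is one way to keep security and governance "baked in, not bolted on" to every generated specification.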

AI-generated and human-authored specifications are complementary

Cornerstone with bubbles and AI consultant

The results were directionally correct and immensely valuable. They covered typical flows, exceptions, system components, interaction diagrams and even user journey maps with sentiment scores drawn from the narration, along with acceptance criteria and initial test cases.

What this taught us is that AI-generated and human-authored specifications are complementary, not competing. Together they form the human-sanctioned intent for implementation. 

The AI agent excels at structuring the why, the who, the sequence, and how we know we're done. However, human supervision remains essential: the human architect ensures what's generated stays aligned with expectations and intended outcomes, and grounded in business reality. As agent-native development workflows mature, these specifications serve human teams while providing AI agent-readable intent, making the investment in a well-curated knowledge base doubly important.
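The structure the AI agent excels at (the why, the who, the sequence, and done-criteria) plus the human approval gate could be sketched as follows. The field names, the approval flag and the output shape are hypothetical, chosen only to illustrate dual human- and agent-readable intent:

```python
from dataclasses import dataclass, field

@dataclass
class Specification:
    """Human-sanctioned intent in a form both teams and agents can consume.
    Fields mirror what the text says AI structures well; names are illustrative."""
    why: str                          # business intent behind the work
    who: list[str]                    # personas drawn from the journey map
    sequence: list[str]               # ordered flow or interaction steps
    acceptance_criteria: list[str]    # "how we know we're done"
    human_approved: bool = False      # architect sign-off keeps it grounded

def to_agent_readable(spec: Specification) -> dict:
    """Emit a plain dict an agent-native workflow could ingest; the approval
    gate enforces the human supervision the text calls essential."""
    if not spec.human_approved:
        raise ValueError("Specification requires human architect approval")
    return {
        "intent": spec.why,
        "personas": spec.who,
        "steps": spec.sequence,
        "done_when": spec.acceptance_criteria,
    }
```

Refusing to emit an unapproved specification is one simple way to encode the principle that AI-generated and human-authored intent are complementary rather than competing.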

The continuous flow of software delivery

This journey is leading us to use specifications in new ways by guiding agent-native development through specification-driven development (SDD) and powering accelerators that generate UX designs and automated test scripts. Where “vibe coding” relies on intuition (explain what you want in plain language and let AI generate the code), SDD moves teams toward intent engineering by defining requirements before code is written or generated. If the earlier stages are about building the right thing, SDD is about building it right, turning clear specifications into reliable software. Our companion blog, Spec-driven development: From vibe coding to intent engineering, explores this next stage in detail.

As agentic platforms mature, the architect’s role is also shifting from authoring every specification to curating context, including building knowledge bases, defining guardrails and keeping AI-generated output aligned with business strategy.

Key lessons from our journey

  • Starting with the problem and human experience, not the technology, changed everything. By beginning with the user journey and challenge narration, we avoided accumulating AI experience debt and stayed focused on building the right thing.
  • Design created the alignment we couldn't achieve with documents alone. Rapid prototyping gave users something to validate early and gave leaders a shared vision. That shared vision is what converted momentum into stakeholder confidence.
  • Building our knowledge base early paid off. Curated knowledge spanning security, data privacy, governance, architecture and user experience principles enabled AI to generate the specifications needed to build it right.
  • Innovation at scale isn't a one-off workshop. It requires a repeatable engine connecting business problems, design thinking and AI development, along with a willingness to evolve the approach as you go.

Keeping humans at the center

Our journey is far from complete, and that is precisely the point. We're expanding our UX capabilities into rapid prototyping backed by prompt design, including enablement programs and bi-weekly UX and AI design jams that are strengthening collaboration between our design and development teams. 

We continue to advance our specification-generation approach through multi-agent orchestration, finding the balance between guiding human teams and shaping AI agent-readable intent. The current of change is strong, but by staying close to the problem and keeping users at the center, the direction becomes clearer.

For a deeper exploration of why this matters at the board level, listen to CGI CTO Dave Henderson's conversation with John Davis and Victor Foulk in our podcast, From AI to ROI: Reimagining software delivery for real business value.

 

 


About these authors

Jonathan McPhail

Vice-President, Corporate Services - Architecture and Design Advisory

Jonathan McPhail leads the Architecture and Design Advisory team in CGI's Office of the CTO. In this role, he focuses on evolving architectural practices for AI-augmented change.

Dennis Russell

Director, Corporate Services - Architecture and Design Advisory

Dennis “Denny” Russell is the UX and AI Lead within CGI’s Office of the CTO, operating at the intersection of AI, product strategy, innovation and user experience. He partners with leaders and solution teams to transform emerging technologies into practical, responsible and user-centered solutions that ...

Vijay Chevendra

Director, Corporate Services - Architecture & Design Advisory

Vijay Chevendra is a Director in CGI's Office of the CTO, leading solution architecture and design advisory. His experience spans both the business and technology aspects of large-scale transformations.

Chrissie Shum

Director, Consulting Intellectual Property - Canada

Chrissie leads operations for CGI’s Toronto Financial Services Business Unit. She serves as a strategic value driver across the business and is responsible for operational excellence, execution discipline, business performance and value delivery.