In our latest Voice of Our Clients research, 85% of defense and intelligence leaders report that their organizations have adopted enterprise-wide or function-specific AI strategies. Adoption is high, but trust must be higher.
Across ministries, commands and agencies, one challenge stands out: speed without proof erodes confidence. Decision-makers need recommendations that arrive with their evidence and approvals that move cleanly into accountable work without breaking governance or adding overhead.
In this model, AI acts as an advisor to the human in the loop, providing verified, traceable recommendations that accelerate decision-making while keeping authority and accountability firmly with the operator. This “four-eye principle” ensures that every action remains human-led and AI-informed, balancing operational speed with responsibility and oversight.
We call this verification-first AI: a practical operating model where every recommendation is traceable and ready to execute. This approach aligns with NATO’s six Principles of Responsible Use (PRUs) for AI in Defense—lawfulness, responsibility and accountability, explainability and traceability, reliability, governability and bias mitigation—and with the EU AI Act requirements for high-risk systems, including risk management, human oversight and logging.
Designing for verifiability
Verification-first AI begins with design. Every recommendation must be proven before it is approved.
- Provenance before approval: Recommendations surface with sources, confidence checks and policy context, allowing leaders to understand why an option is suggested and what it depends on.
- Explainable by default: Outputs carry verification panels showing what data was retrieved, what was checked, and what thresholds were met, so explainability is built in, not reconstructed later.
- Fit for audit: Each accepted or rejected decision carries an evidence package, allowing oversight bodies to trace what was known, when and by whom.
This foundation makes every action transparent, reproducible and auditable, qualities that are core to maintaining confidence.
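To make the design concrete, the bullets above can be sketched as a simple data structure. This is a minimal, hypothetical illustration; the class, field and check names are assumptions for demonstration and do not represent any CGI, AI Felix or NATO schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical evidence package: names are illustrative only,
# not part of any CGI or NATO specification.
@dataclass
class Check:
    name: str        # e.g. "source-freshness"
    threshold: str   # the policy threshold that was applied
    passed: bool

@dataclass
class EvidencePackage:
    sources: list[str]    # data retrieved to form the recommendation
    checks: list[Check]   # verifications run before the option surfaces
    policy_context: str   # policy under which the option is valid
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def is_approvable(self) -> bool:
        # Provenance before approval: a recommendation surfaces for
        # human approval only if every verification check passed.
        return bool(self.checks) and all(c.passed for c in self.checks)

pkg = EvidencePackage(
    sources=["sensor-feed-A", "logistics-db"],
    checks=[Check("source-freshness", "< 15 min", True),
            Check("confidence", ">= 0.9", True)],
    policy_context="policy-context-7",
)
print(pkg.is_approvable())  # True: all checks passed
```

Because the package travels with the recommendation, an oversight body can later reconstruct what was retrieved, what was checked and which thresholds were met, rather than re-deriving that evidence after the fact.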
Proven capabilities: AI Felix and CGI AIOps Director
CGI brings NATO’s responsible use principles to life through AI Felix and CGI AIOps Director, which demonstrate how verification-first AI operates in mission environments.
- Lawfulness, responsibility and accountability: The solutions operate in NATO SECRET-grade environments, disconnected from the internet, with governance frameworks and life cycle control that ensure full compliance and accountability.
- Explainability and traceability: AI Felix delivers task-level traceability, while CGI AIOps Director adds logging and orchestration, creating full visibility into what data was used, when and by which system.
- Reliability and governability: CGI AIOps Director manages model lifecycles and operational reliability, while AI Felix integrates these governed models into mission workflows to maintain consistency and trust.
- Bias mitigation: CGI AIOps Director supports bias checks during model training, helping reduce potential bias early in the AI lifecycle and laying the foundation for future end-to-end bias governance.
- Provenance before approval: Provenance is supported through CGI AIOps Director’s logging and orchestration capabilities, while AI Felix accelerates workflows, moving toward the verification-first goal of attaching full evidence packages to recommendations.
- Audit-ready decisions: Combined life cycle management, logging and task traceability enable auditability and retrospective analysis, ensuring decisions can be reviewed for what was known, when, and by whom.
- Interoperability, coalition and mobility: AI Felix supports NATO Federated Mission Networking (FMN), and CGI AIOps Director operates across secure, multi-domain environments, enabling coalition-ready, verifiable AI operations.
These solutions combine governance, transparency and interoperability to build trust at speed.
Closing the approval-to-action gap
In mission environments, the bottleneck is rarely ideas; it's the handoff from approval to execution. Verification-first AI accelerates this process by embedding traceability into the workflow itself.
- When a course of action is approved, tasks are generated automatically and assigned to the right teams.
- Evidence trails are preserved by default, capturing artifacts even under pressure.
- Decisions and their evidence are portable across partners, aligning with NATO’s FMN goals to enhance interoperability and readiness.
The results are faster, accountable decision cycles that maintain governance integrity and operational momentum.
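The handoff described above can be sketched in a few lines. This is an illustrative sketch only; the function and field names are assumptions for demonstration, not an actual AI Felix or CGI AIOps Director API.

```python
from datetime import datetime, timezone

# Illustrative approval-to-action handoff: on approval, tasks are
# generated automatically and each one carries the decision's
# evidence trail, so provenance travels with the work.
def approve_course_of_action(coa: dict, approver: str) -> list[dict]:
    approval = {
        "coa_id": coa["id"],
        "approver": approver,
        "approved_at": datetime.now(timezone.utc).isoformat(),
        "evidence": coa["evidence"],  # preserved by default
    }
    # One task per step, each assigned to a team and referencing
    # the approval record for later audit or partner exchange.
    return [{"task": step, "assigned_to": team, "approval": approval}
            for step, team in coa["steps"]]

coa = {
    "id": "COA-17",
    "evidence": {"sources": ["report-3"], "checks_passed": True},
    "steps": [("relocate assets", "logistics"),
              ("update posture", "operations")],
}
tasks = approve_course_of_action(coa, approver="duty-commander")
print(len(tasks))                       # 2: one task per approved step
print(tasks[0]["approval"]["coa_id"])   # COA-17
```

Because every generated task embeds the approval record, the evidence package is portable across teams and coalition partners without any extra step taken under pressure.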
Operating where missions happen
Trusted AI must function within the realities of classified mobility and coalition interoperability. Verification-first patterns are already proven for SECRET-grade operations, supporting parallel domains on a single device, secure remote work and exchange of evidence packages across partners.
These principles reinforce NATO’s Digital Transformation Implementation Strategy, advancing digital resilience and multi-domain effectiveness.
Measuring impact
Verification-first AI delivers clear performance improvements:
- Higher decision reproducibility: Decisions can be verified and repeated, reducing the need for re-approvals and lowering audit findings.
- Faster approval-to-action cycles: Verified recommendations move quickly into execution, cutting delays and minimizing policy deviations.
- Stronger operational resilience: Evidence travels with decisions across organizations, ensuring continuity and trust even under mission pressure.
This turns trust into a tangible outcome, one that is essential for sustained operational advantage.
How CGI can help
CGI helps defense and intelligence leaders deploy decision systems, classified-mobility frameworks and mission-ready AI capabilities that operate under verification-first principles.
We deliver:
- Decision support with built-in retrieval, verification and tasking
- SECRET-grade, multi-domain mobility through managed services
- Deployable, secure multi-user access kits for the edge
With CGI, clients achieve AI at mission speed, with proof, delivering trusted outcomes from headquarters to the field.