Lance Schone

Director, Consulting Expert | U.S. IP Solutions

State and local governments are under growing pressure to move artificial intelligence (AI) from pilot programs to real-world operations. Legislators want measurable progress, residents expect faster and more responsive services, and agencies face ongoing budget and workforce constraints.

But unlike commercial organizations, governments cannot afford to treat AI as a “move fast and fix it later” technology. Public-sector leaders are accountable for decisions that directly affect citizens, including benefits and payroll, procurement and public safety. In government, trust matters more than features.

That is why many public sector AI initiatives stall. The challenge is not whether AI can generate value, but whether it can operate transparently, responsibly and under public scrutiny. Governments are not simply evaluating AI capabilities. They are evaluating whether the technology can be trusted.

Why the trust gap around AI projects is so wide

“Move fast and break things” has become a common motto associated with tech startups – they move quickly, test aggressively and refine later. Government leaders operate in a different environment. They are responsible for decisions that impact citizens and are held to rigorous standards, especially those related to benefits, taxes, payroll, procurement and public services. When something goes wrong, the consequences extend far beyond a missed product milestone.

In one recent client discussion, interest in AI capabilities quickly gave way to questions about governance, auditability and bias mitigation. The conversation was not centered on what the technology could do, but whether agencies could confidently manage and oversee it.

This is what separates seasoned government practitioners from their commercial counterparts: the focus on real governance needs first. 

What government leaders prioritize

When state and local government agencies evaluate AI projects, these are usually the first questions they ask:

  • Can we clearly see when AI is being used and understand what it is doing?
  • Can we explain how outputs were produced and who is accountable?
  • Can we disable capabilities without disrupting core operations?
  • Can the system align with our policies, controls and audit requirements?
  • Will this approach hold up as regulations and expectations evolve?

That trust-first mindset showed up clearly in a recent RFI (Request for Information). The AI-related questions were not about functionality but about governance, privacy, security, auditability and risk mitigation. For governments, use cases matter, but only after trust is established.

What government-ready AI really requires

For AI in state and local government, readiness comes from architecture and controls, not marketing claims. In practice, government-ready AI must be built with:

  • Transparency: Users know when AI is involved and where information comes from
  • Human-in-the-loop controls: People remain responsible for final decisions
  • Policy alignment: Agencies can configure or disable capabilities as needed
  • End-to-end auditability: Teams can trace inputs, prompts, outputs and resulting actions

These are not optional extras, but a baseline for adoption. If a platform cannot support them natively, government leaders will continue to treat AI as a risk to manage rather than a capability to scale.
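As a concrete illustration of that baseline, the four controls above can be sketched in a few lines of code. This is a minimal, hypothetical sketch, not any specific platform's implementation; the class and field names (`GovernedAIFeature`, `AuditRecord`) are invented for illustration. It shows how a single AI capability might carry an explicit disable switch, a named human approver and an end-to-end audit trail:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable, Optional

@dataclass
class AuditRecord:
    """End-to-end trace of one AI-assisted action."""
    feature: str
    prompt: str
    output: str
    approved_by: Optional[str]  # human-in-the-loop: who signed off
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

class GovernedAIFeature:
    """Wraps an AI capability with a policy disable switch,
    a required human approver and an audit log."""

    def __init__(self, name: str, model_fn: Callable[[str], str]):
        self.name = name
        self.model_fn = model_fn
        self.enabled = True  # policy alignment: agencies can switch this off
        self.audit_log: list[AuditRecord] = []

    def run(self, prompt: str, approver: str) -> str:
        if not self.enabled:
            # Disabling the capability does not disrupt the rest of the system
            raise RuntimeError(f"AI feature '{self.name}' is disabled by policy")
        output = self.model_fn(prompt)  # transparency: AI involvement is explicit
        # Auditability: record input, output and the accountable human
        self.audit_log.append(AuditRecord(
            feature=self.name, prompt=prompt,
            output=output, approved_by=approver))
        return output

# Usage: a stubbed model, one governed call, then a policy-driven shutdown
feature = GovernedAIFeature("benefits-summary", lambda p: f"[draft] {p}")
draft = feature.run("Summarize eligibility rules", approver="case.supervisor")
feature.enabled = False  # capability disabled without a new technology project
```

The point of the sketch is structural: when the disable switch, approver and audit record live inside the same object as the capability itself, governance is not an add-on that can drift out of sync with the AI feature it controls.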

Why embedded AI matters in government solutions

This is also why built-in AI matters more than bolted-on AI for government solutions. Public sector enterprise systems stay in place for years, sometimes decades. Leaders need to know that AI can evolve without forcing a new technology project every time a policy changes or a capability expands.

When AI shares the platform’s security model, governance framework and audit structure, agencies keep control as the technology changes. That is a far more credible long-term approach than attaching a new tool to the side of an existing system. The market is moving in this direction: AI embedded directly into core government platforms, with built-in governance, transparency and role-based controls, rather than introduced as separate tools.

For state and local government, AI adoption is a trust journey. Agencies often start with one use case, a limited group of users and a deliberate review cycle. That is not resistance. It is responsible leadership.

To explore how modern AI ecosystems can align technology, governance and workforce readiness, read our viewpoint on Navigating an AI ecosystem: Scaling for impact.

About this author


Lance is a Technology Product Manager for the CGI Advantage ERP program. Lance is responsible for driving the technology strategy and roadmap for the Advantage suite of products. With over 20 years in the tech industry, covering a broad spectrum from solution architecture to product ...