Lucille Bonenfant

Vice-President and Chief Privacy Officer

Diane Gutiw

Vice-President and Global AI Research Lead

Every January 28, organizations around the world celebrate Data Privacy Day (also known as Data Protection Day). Data Privacy Day commemorates the first international treaty governing data privacy, signed on January 28, 1981.

Back then, legal requirements encouraging businesses to respect privacy were limited. Over the following decades, regulatory developments around the globe, along with advances in how organizations safeguard data, have better protected individuals. Yet advancing technologies, particularly GenAI, demand even greater vigilance. This day serves as a reminder of the ongoing effort required to raise awareness of, and promote, privacy and data protection best practices, regardless of the technology.

Data privacy and AI obligations come in many forms, including regulations, contractual agreements, rulings, recommendations from authorities, and industry standards. The approach and requirements may differ by region, country, and business sector, and are ever-changing. They are also interpreted and applied differently, whether by organizations, judges, data protection authorities, auditors, or associations that influence standards and best practices.

As the Chief Privacy Officer and AI Research Center Lead at CGI, we spend a great deal of time working with teams across our global operations not only to keep pace with, but to stay ahead of, rapidly evolving technologies and the privacy-focused AI landscape.

CGI uses a Responsible Use of AI (RAI) framework for developing data-driven decisions and outputs, which leverages the best practices and ethical principles of scientific research. This RAI framework includes clear and enforceable guidelines, as well as a robust risk matrix as shared in this blog: Guardrails for data protection in the age of GenAI. We also actively engage in AI governance activities, such as being a signatory of Canada’s Voluntary Code of Conduct for Artificial Intelligence and staying engaged in Frontier AI (emerging AI technology) discussions.

As we commemorate Data Privacy Day 2024, we’d like to share a few best practices your organization can likewise follow, particularly as the power and potential of AI grow across all industries while also introducing risks for data, users, and fundamental rights.

Best practice #1: Establish data privacy and AI governance teams.

Arguably the most important best practice for any organization is setting up a governance model for monitoring data privacy and AI requirements and ensuring compliance. This model should extend across the enterprise and bring together business, data and technology/data science experts, including privacy, AI, legal, and security professionals.

Governance teams should focus on global and local requirements, tracking and analyzing updates and differences. With a thorough understanding of existing and emerging requirements, the teams should develop policies and processes to ensure compliance at both the global and local levels, as well as oversee their implementation.

As a global organization, we established a global privacy program to facilitate compliance analysis. It is frequently updated and applicable to all CGI entities and professionals aligned with the requirements of the most stringent legislation. For AI implementers, this global lens provides a single compliance framework for AI usage and solutions, thus enabling CGI to have a standard organizational approach regardless of the location of the team or solution.

Despite differing regional requirements, global and local policies and processes should be as consistent as possible in substance, and they should be applied consistently across the enterprise. Teams working in silos and creating disparate policies and processes prevent the enterprise-wide view of compliance and the standardized approach that effective compliance requires, and they undermine the confidence of stakeholders.

Best practice #2: Monitor continuously and agilely.

Because legal requirements evolve at a fast pace, continuous and agile assessment is required. Even within the duration of a single project, evolving requirements can affect the work and demand quick adaptation and efficient change management. Constant monitoring is essential, along with a readiness to pivot in response to new requirements.

At CGI, we’ve built a Data Privacy Landscape platform that enables our professionals to identify, by geography, applicable data protection legislation and restrictions, and to access high-level compliance guidance. It’s available to our professionals via our company intranet. We also benefit from industry associations such as the International Association of Privacy Professionals (IAPP) and external law firms, where we have access to newsletters, privacy perspectives and continuous regulatory updates.

Once new privacy and data protection requirements are released, you need not only to understand their impact on your organization, but also to be prepared to adapt your AI-focused policies, practices, systems, processes, and tools accordingly. This requires instilling a change management mindset within the organization, along with clear communication that makes the changes relevant to everyone affected.

Ensuring that the risk management approach is streamlined, intuitive, and easy to use at each stage of an AI engagement empowers teams to be more confident that they have addressed AI risks (e.g., ethics, privacy, security, transparency, explainability, and reliability) from a new initiative’s inception through its rollout and ongoing operations.

Best practice #3: Invest in training.

Investment in enterprise-wide training and professional development on emerging technologies such as AI is essential to ensure your professionals master technologies and best practices and make data protection an everyday priority. Your professionals must have the knowledge and skills to keep pace with related technologies and associated legal requirements.

Training can consist of sharing knowledge among internal teams on a systematic, ongoing basis, for example through working groups and forums on practical applications of AI risk mitigation. It also can involve classes and certifications through formal learning programs. CGI is investing in all of these efforts and more to ensure our professionals are fully qualified to improve efficiency, to automate aspects of everyday activities, and to support clients in a responsible way.

Learn more about CGI’s privacy and data protection program by consulting the privacy section on our website, and visit our artificial intelligence blog for more insights. We also welcome you to contact us to continue the conversation.

About these authors

Lucille Bonenfant

Vice-President and Chief Privacy Officer

In May 2021, Lucille Bonenfant was appointed CGI’s Chief Privacy Officer, overseeing the company’s global data protection strategy, enterprise-wide data protection policies and procedures, and data protection regulatory compliance.

Diane Gutiw

Vice-President and Global AI Research Lead

Diane leads the AI Research Center in CGI's Global AI Enablement Center of Excellence, responsible for establishing CGI’s position on applied AI offerings and thought leadership.