As a data scientist and AI practitioner, I am excited to see so many positive AI use cases being leveraged to bring quick information and insights to experts and business people alike – the potential of this rapidly evolving technology is truly limitless.
We must continue to encourage and advance AI innovation, as the value it brings to our lives and businesses is still largely unrealised. At the same time, we must advance AI responsibly: AI development must apply rigour and risk management to ensure solutions are accurate, inclusive, transparent, and safe.
It is an evolution, not a revolution
AI isn’t new, nor is it magic. Generative AI and large language models (LLMs) are a progression of the insights you can get from conversational AI (chatbots) and data to solve complex problems. Such insights span the spectrum from reporting to forecasting to machine learning, and AI algorithms can be trained to learn continuously as more data becomes available and to mimic human reasoning.
At CGI, we’ve been using AI for years to help people gather insights and make informed decisions. Over time, the terminology has evolved through expert systems, neural networks, cascading models, and decision support systems. Among AI’s many proven use cases are chatbots, weather prediction, automated credit checks, automated fraud detection, predictive maintenance, and AI-driven diagnostic image analysis. For example, AI is brilliant at spotting changes in images that are hard to detect with the human eye, such as minute brain bleeds in CT scans.
In our global Voice of Our Clients research
While some people have expressed concerns about the potential for losing jobs to AI, we believe AI will help us address growing talent and resource capacity shortages – particularly as more Baby Boomers retire and other demographic shifts point to workforce shortages. In our global research, 80% of the executives interviewed cite IT recruiting challenges.
AI can and should be used to help redesign how we work. It has the potential to take on mundane, repetitive tasks and to supplement experts, not replace them. AI cannot remove humans from every workflow, because it cannot accurately mimic complex human reasoning and empathy or account for human relationship factors. All of these are difficult to model programmatically, as much of human reasoning is based on intuition, reactions, and interactions – the very things that make us human.
Additionally, using AI to support decision-making requires some form of human monitoring or intervention to ensure things stay on track. For example, in collaborating with British Columbia health leaders to develop a chatbot to help answer constituent and health worker questions about COVID-19, we watched the chatbot we trained very closely to ensure it yielded accurate, in-context results. While the technology gets smarter with each interaction, there is still a need to monitor the responses and ensure they stay on track.
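This kind of human-in-the-loop monitoring can be sketched in code. The example below is a minimal, illustrative triage loop, not CGI's actual implementation: the confidence score, the 0.75 threshold, and the status labels are all assumptions for illustration. High-confidence answers are auto-approved; everything else is queued for a human reviewer.

```python
from dataclasses import dataclass, field

@dataclass
class ReviewQueue:
    """Route low-confidence chatbot answers to human reviewers.

    Illustrative sketch only: the confidence score is assumed to come
    from the model, and 0.75 is an arbitrary example threshold.
    """
    threshold: float = 0.75
    flagged: list = field(default_factory=list)

    def triage(self, question: str, answer: str, confidence: float) -> str:
        if confidence < self.threshold:
            # Keep the exchange for a subject matter expert to review.
            self.flagged.append((question, answer))
            return "needs_human_review"
        return "auto_approved"

queue = ReviewQueue()
status = queue.triage("What are the isolation rules?", "Isolate for 5 days.", 0.62)
```

In practice, the flagged queue would feed a review dashboard, and reviewer corrections would be folded back into the training data so the system improves over time.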
What’s new is generative AI
Large language models (LLMs), such as those used in generative AI, expand on what we see in conversational AI and intelligent automation. These models can analyse, process, and synthesise huge amounts of data in all forms and provide in-context outputs quickly. The big advancement with generative AI is the ability to produce something new.
There is a paradigm shift from automation to creation. Apps like OpenAI’s ChatGPT and Google’s Bard create an easy interface that makes AI more accessible to citizens. What’s more, the models can now be deployed at scale.
We see this as a huge opportunity, but one that requires moving forward with caution and mitigating risk. CGI is actively exploring opportunities to leverage this technology responsibly. We’ve built models and solutions that evaluate the accuracy of AI responses over time, providing transparency into the source of information and leveraging in-context training sets – creating efficiencies and opportunities to extend this evolving technology to meaningful tasks.
Satisfying the accelerating digital needs of customers and citizens
Today’s consumers are quite savvy and expect this sort of innovation to supplement their lives, such as with their smart devices, personalised services, and data and AI-driven apps for banking, energy consumption, shopping, and fitness – all at their fingertips.
In our global research, the top trend overall cited by executives is becoming digital to meet customer and citizen needs. Now is a perfect opportunity for AI innovation as the public is ready, and the technology is here. We just need to understand how to use it responsibly and ensure we have in place both a solid data strategy and access to reliable data.
Government and industry have invested heavily in data over the last 50 to 60 years. Cloud capabilities have advanced to provide the necessary processing speed, scale, and security. And cloud providers continually bring new AI tools to market that are more accessible to business decision makers.
Critical to the success and scalability of these technologies is a holistic data and AI governance strategy; yet just 1 in 5 executives we interviewed extend their data strategy across their value chain and partners. Also, most executives (83%) are focused on improving data quality, management, and governance to advance their data strategy.
Responsible use of AI
Building and using AI responsibly requires several elements, including:
- Using an AI risk model and consistently applying academic-level rigour (e.g., statistical relevance, low bias, low variance).
- Finding the right use cases focused on real business problems and ensuring problem statements are well defined and the technology solution and data are appropriate to solve those problems.
- Establishing solid governance for data and AI to ensure that data fed into AI models is correct. (Interestingly, 80% of our work tends to be around the data for AI vs. AI coding.)
- Ensuring problem statements and data for training are representative of the appropriate populations, inclusive and free from bias. For example, if using data from a specific geography, is there a risk that age, gender, and ethnicity may not be evenly represented?
- Protecting data privacy, security, intellectual property, and data rights.
- Providing transparency to the source of information as well as any context to the information being gathered, manipulated, and returned as a response.
- Ensuring business and subject matter experts are included in the interpretation of responses to ensure the insights are meaningful.
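The representativeness point above can be made concrete with a simple pre-training check. The sketch below is illustrative only – the attribute name, reference shares, and 5% tolerance are assumptions – but it shows the idea: compare each group's share of the training data against a reference population share and flag under-represented groups.

```python
from collections import Counter

def representation_gaps(records, attribute, reference_shares, tolerance=0.05):
    """Flag groups whose share of the training data falls short of a
    reference population share by more than the tolerance."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total if total else 0.0
        if expected - observed > tolerance:
            gaps[group] = round(expected - observed, 3)
    return gaps

# Example: a sample that skews 80/20 against a 50/50 reference population.
sample = [{"gender": "male"}] * 8 + [{"gender": "female"}] * 2
gaps = representation_gaps(sample, "gender", {"male": 0.5, "female": 0.5})
```

In practice, a check like this would run over every sensitive attribute (age, gender, ethnicity, geography) before training, with reference shares drawn from census or domain data.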
Once these guardrails are in place, organisations can benefit from AI’s vast potential to enable faster and easier access to data synthesis to solve business problems and support decisions. Another area of opportunity is improving communications through more personalised customer, citizen, and employee experiences. A key benefit is in reducing menial tasks and supplementing and supporting people’s work, particularly as all industries face talent shortages. And, of course, AI-powered processes help save time and reduce costs through greater process efficiency and productivity.
As shared above, we see some truly life-changing benefits of AI in healthcare in more rapidly and accurately scanning large, complex data sets (in addition to diagnostic imaging, there are applications in genomics) and advancing personalised medicine. Example use cases in other sectors include:
- Detecting changes in equipment before a failure to extend the useful lifespan of assets as well as to ensure safety of equipment operators
- Personalised services to customers focused on specific needs
- Improving marketing insights
- Detecting and preventing fraudulent activities
- Simulating crowd and citizen movement to support transportation and demographic planning for municipalities and events
Here are some additional use case examples we're supporting:
- Improving water quality for communities
- Advancing predictive maintenance in manufacturing
- Facilitating situational awareness for first responders
- Preventing ship-whale collisions
Recommendations for ROI-led investment
As you consider your next steps toward pursuing AI innovation, I’d like to leave you with several ROI-based success factors:
- Implement an AI risk model to ensure the model is accurate and the outputs are meaningful.
- Start small and scale.
- Initiate AI solutions with a clear problem statement and ensure the data and models support the questions being asked.
- Tap the power of the cloud to speed and scale processing – especially for organisations that have siloed data and want visibility across the enterprise.
- Consider AI in redesigning work, asking how you can do things smarter and replace manual tasks, and what employee skillsets are needed to succeed with AI.
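The first recommendation – verifying that a model is accurate before trusting its outputs – usually comes down to held-out evaluation. As a minimal illustration, here is a plain-Python k-fold cross-validation sketch; the mean-predictor "model" is a deliberate stand-in for whatever model is actually under review:

```python
def k_fold_mse(values, k=5):
    """Estimate out-of-sample error via k-fold cross-validation.

    Illustrative only: the "model" here is a trivial mean predictor
    standing in for whatever model an AI risk review would evaluate.
    """
    folds = [values[i::k] for i in range(k)]  # round-robin split into k folds
    errors = []
    for i, holdout in enumerate(folds):
        # Train on every fold except the held-out one.
        train = [v for j, fold in enumerate(folds) if j != i for v in fold]
        prediction = sum(train) / len(train)  # "fit" the mean model
        errors.extend((v - prediction) ** 2 for v in holdout)
    return sum(errors) / len(errors)
```

A real review would swap the mean predictor for the candidate model and report bias and variance diagnostics alongside the error estimate, in line with the rigour called for earlier.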
We believe organisations can benefit immensely by using AI to solve problems faster than before. But there must be appropriate guardrails. This requires taking the steps outlined above, as well as staying abreast of evolving regulations. We also need technological advances to help us detect and expose irresponsible uses of AI.
At CGI, we help clients advance the responsible use of AI through our proven risk framework and by helping them upskill their people. In this way, we continue to apply AI for real-world benefits, running toward AI, not away from it.