Koen van Kan

Conversational AI Consultant at CGI Nederland

How cognitive forcing improves human-AI decision-making

Artificial Intelligence (AI) systems increasingly assist humans in decision-making, but overreliance on AI suggestions can lead to negative outcomes, from wrong turns to life-threatening situations. It is crucial to counter this overreliance and empower humans to use AI for better decisions, especially in scenarios involving risk. This blog illustrates how cognitive forcing, a best-practice approach rooted in the psychology of human decision-making, can help prevent AI overreliance.

Human-AI Decision-Making

Using AI-generated suggestions for day-to-day tasks has, knowingly and unknowingly, become a regular part of our lives. From choosing what movie to watch on our favorite streaming platform, to finding the quickest route to our destination, to getting suggestions for birthday presents, AI is deeply ingrained in our everyday decision-making. While these everyday settings are relatively mundane and allow for a certain tolerance for error, not all decision-making contexts share these characteristics. AI also assists in detecting brain hemorrhages [1], engages in autonomous driving, and enhances military decision-making [2]. In such high-complexity, high-stakes contexts, optimal decision-making is vital. It is therefore crucial to understand the dynamics of human-AI decision-making to ensure desirable and safe outcomes.

The Problem of AI Overreliance

Combining artificial intelligence with human intelligence does not always lead to better decision-making. People often adopt AI suggestions even when those suggestions are wrong and they would have made a better choice on their own. This phenomenon is known as AI overreliance [3]. When humans adopt incorrect AI suggestions, the results can be disastrous.

This issue is especially relevant in complex, high-stakes decision-making. Recently, the European Union introduced the AI Act, which classifies AI solutions by risk level (minimal, limited, high, and unacceptable), each requiring a tailored approach to risk mitigation [4]. Although specific to the EU, this framework highlights global concerns about the responsible deployment of AI in complex, high-stakes contexts. High-risk AI systems, despite their benefits to individuals, organizations, and society, can endanger human health and safety if not carefully managed. Examples of high-risk AI systems include medical diagnostics, aviation safety protocols, and autonomous vehicle navigation. Mistakes in these contexts can lead to permanent physical, financial, or reputational damage, or even loss of life.

As the use of high-risk AI systems in decision-making is projected to grow [5], addressing AI overreliance becomes essential. To understand the root causes of this overreliance, we will first explore the psychological mechanisms underlying human decision-making and the tendency to trust AI recommendations, even when caution might be warranted.

The Role of Heuristics

In decision-making, we are less rational than we think. This bounded rationality stems from our reliance on easy, efficient rules, known as heuristics. In heuristic decision-making, humans use simple, low-effort (unconscious) mental shortcuts instead of consciously evaluating different options. Essentially, we settle for what works ‘well enough’. This conserves cognitive energy, allowing us to navigate the complexities of daily life without becoming overwhelmed by the need to analyze every detail.

However, this dependence on heuristics is a double-edged sword. While heuristics allow us to function efficiently in many situations, they leave us vulnerable to judgment errors, particularly in complex or unfamiliar contexts where more thorough analysis is required. This tendency to take cognitive shortcuts affects not only ordinary decision-making but also how we interact with AI systems.

AI systems are often built on the assumption that users will carefully analyze their suggestions. This assumption falls short: such analysis requires significant cognitive effort, which people tend to avoid. Instead, individuals form estimates of the AI system’s overall competence [3] and use that perceived competence as a heuristic for adopting its suggestions. As a result, once users perceive an AI system as sufficiently competent, they become prone to overreliance on its suggestions. Teams building AI systems that support decision-making would therefore benefit from incorporating countermeasures against AI overreliance.

Cognitive Forcing as a Solution

Cognitive forcing interventions can serve as countermeasures against AI overreliance. Cognitive forcing entails intervening at the moment of decision to disrupt heuristic reasoning, thereby forcing the user (i.e., the human-in-the-loop) to engage in analytical thinking. In other words, users are pushed from automatic (unconscious) decision-making into a more active, conscious, and critical mode. Such interventions have a proven track record in the medical field, enhancing diagnostic and treatment decision-making for both individual practitioners and medical teams [6]. Recent studies have found that this technique can also be applied at the moment a human receives AI-suggested output in a decision-making process. This successfully counters the mechanism behind AI overreliance, resulting in more discerning evaluation of AI outputs and ultimately better AI-supported decision-making [7].

Three types of cognitive forcing interventions have been proven effective:

  1. Checklists: These enhance decision-making in AI-assisted environments by requiring users to verify key information before accepting AI suggestions. For example, in medical diagnostics, a checklist prompts users to compare symptoms with the AI’s suggestions, preventing blind reliance on AI (a minimal code sketch of this pattern follows the list).
  2. Diagnostic time-outs: Pauses in decision-making where users critically review AI suggestions. In customer service, an agent could pause to review an AI-generated chatbot response to ensure it is empathetic and accurate before sending it.
  3. Ruling out alternatives: Users must evaluate and reject other options before accepting AI suggestions. In finance, this could involve comparing investment options to validate the AI’s suggestion.
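
To make the checklist pattern concrete, here is a minimal sketch of how such a gate might be wired into a decision-support tool. Everything in it (the CognitiveForcingGate and ChecklistItem names, the example prompts) is a hypothetical illustration, not an existing API or CGI tooling:

```python
# A minimal sketch of a checklist-style cognitive forcing gate.
# All names here are hypothetical illustrations, not an existing API.
from dataclasses import dataclass


@dataclass
class ChecklistItem:
    prompt: str            # verification step shown to the user
    confirmed: bool = False


class CognitiveForcingGate:
    """Blocks acceptance of an AI suggestion until every checklist
    item has been explicitly confirmed by the user."""

    def __init__(self, items: list[ChecklistItem]):
        self.items = items

    def confirm(self, index: int) -> None:
        self.items[index].confirmed = True

    def can_accept(self) -> bool:
        # The suggestion stays locked while any item is unconfirmed,
        # forcing deliberate review instead of one-click adoption.
        return all(item.confirmed for item in self.items)


# Hypothetical diagnostic checklist shown alongside an AI suggestion.
gate = CognitiveForcingGate([
    ChecklistItem("Do the patient's symptoms match the AI's finding?"),
    ChecklistItem("Have you ruled out at least one alternative diagnosis?"),
])

assert not gate.can_accept()  # acceptance is blocked at first
gate.confirm(0)
gate.confirm(1)
assert gate.can_accept()      # only now can the suggestion be adopted
```

The same gate could also implement a diagnostic time-out, for instance by requiring a minimum review delay before acceptance is unlocked.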

Quality Collapse as a Side Effect

While overreliance on AI leads to suboptimal decision-making, this is not the only concern. AI models often learn from their interactions with users, continually updating to improve. However, when users accept incorrect AI outputs without question, these errors can become part of the AI’s learning process, making it less accurate and reliable over time. As a result, the quality of AI suggestions may degrade, ultimately diminishing their value in human-AI decision-making.
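
If acceptance events feed a model’s retraining data, one way to limit this degradation is to keep unverified acceptances out of the feedback loop. The sketch below assumes, hypothetically, that each logged acceptance records whether the user completed a cognitive forcing step; the FeedbackRecord fields and helper name are illustrative, not an existing pipeline:

```python
# A minimal sketch of filtering rubber-stamped acceptances out of an
# AI model's feedback loop. Field names are hypothetical.
from dataclasses import dataclass


@dataclass
class FeedbackRecord:
    suggestion: str
    accepted: bool   # did the user adopt the AI suggestion?
    verified: bool   # did the user complete a cognitive forcing step?


def select_training_feedback(records: list[FeedbackRecord]) -> list[FeedbackRecord]:
    # Accepted-but-unverified records may encode overreliance errors;
    # excluding them keeps those errors out of the retraining data.
    return [r for r in records if r.accepted and r.verified]


log = [
    FeedbackRecord("route A", accepted=True, verified=True),
    FeedbackRecord("route B", accepted=True, verified=False),  # rubber-stamped
]
print(select_training_feedback(log))  # only the verified acceptance remains
```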

Balanced Judgment

As AI becomes increasingly embedded in our decision-making processes, it is crucial to strike a balance that maximizes its benefits while leveraging human judgment effectively. This blog has highlighted the risks of relying too heavily on AI suggestions and introduced cognitive forcing interventions such as checklists, diagnostic time-outs, and ruling out alternatives. These tools are designed to disrupt heuristic decision-making and encourage users to critically assess AI recommendations. By integrating these strategies, decision-makers can harness AI's capabilities while ensuring that human insight and analytical thinking remain central. Preventing AI overreliance could even aid the learning process of AI models, allowing them to keep improving over time. Understanding the psychological factors at play underscores the critical role of human cognition in shaping effective AI applications. Ultimately, the goal is to empower humans to make informed decisions alongside AI, leading to smarter, more reliable outcomes across a wide range of decision-making scenarios.


Get in touch with CGI expert Koen van Kan (koen.van.kan@cgi.com) to learn more about the intersection of Psychology and Artificial Intelligence, and how this fits in CGI’s responsible AI framework.

References

[1] CGI. (2024). Using AI to review CT scans and detect brain hemorrhages. https://www.cgi.com/en/case-study/health/artificial-intelligence/using-ai-review-ct-scans-and-detect-brain-hemorrhages

[2] CGI. (2024). CGI to develop AI Network for Defence and Security Accelerator Intelligent Ship Phase 2. https://www.cgi.com/uk/en-gb/news/defence/cgi-develop-ai-network-defence-and-security-accelerator-intelligent-ship-phase-2

[3] Bansal, G., Wu, T., Zhou, J., Fok, R., Nushi, B., Kamar, E., ... & Weld, D. (2021). Does the whole exceed its parts? The effect of AI explanations on complementary team performance. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (pp. 1-16).

[4] European Union. (2024). The AI Act. Human Oversight. https://ai-act-law.eu/article/14/

[5] Gartner. (2021, June 2). Gartner predicts the future of AI technologies. https://www.gartner.com/smarterwithgartner/gartner-predicts-the-future-of-ai-technologies

[6] Lambe, K. A., O’Reilly, G., Kelly, B. D., & Curristan, S. (2016). Dual-process cognitive interventions to enhance diagnostic reasoning: A systematic review. BMJ Quality & Safety, 25(10), 808–820. doi:10.1136/bmjqs-2015-004417

[7] Buçinca, Z., Malaya, M. B., & Gajos, K. Z. (2021). To trust or to think: Cognitive forcing functions can reduce overreliance on AI in AI-assisted decision-making. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW1), 1-21.

About the author

Koen van Kan

Conversational AI Consultant at CGI Nederland

Koen van Kan is an Artificial Intelligence (AI) consultant at CGI Nederland, specializing in designing and implementing interactive AI solutions that add value to the end-user experience. He is passionate about developing ethically responsible AI systems that are both technologically advanced and human-centered.