
Automation Bias

In brief: Automation bias is the tendency to follow the recommendations of an automated system without independent verification, even when contradictory cues are available. Documented for 30 years in aviation and medicine, it affects novices and experts alike, resists training, and, counter-intuitively, grows stronger as the system becomes more reliable. This is no longer just a theoretical concept: it is the primary risk of AI in clinical practice.

Why this concept matters

AI tools are entering clinical practice: suicide risk scoring, diagnostic aids, therapeutic outcome prediction, speech analysis. We hear a lot about their performance, but rarely about the effect they have on the judgment of the clinician using them.

Automation bias tells us this: the more reliable an AI tool is, the more vulnerable it makes you when it errs. After months of flawless use, your diagnostic vigilance erodes—not through incompetence, but through a normal, well-documented cognitive mechanism. Knowing this mechanism is the first step to protecting yourself.

The mistake to avoid: believing expertise protects you

The natural intuition is to think: "I'm an experienced clinician, I won't blindly follow a machine." That is precisely what physicians and radiologists thought—all of whom showed the bias in controlled studies.

The data is clear, including in the medical domain. In studies on Clinical Decision Support Systems (CDSS), incorrect advice led physicians to change an initially correct diagnosis in 6 to 11% of cases (Mosier & Manzey, 2019). In radiology, experienced radiologists' cancer detection rate dropped from 46% to 21% when a faulty automated aid failed to flag lesions (Alberdi et al., 2004). And in a study on electronic prescribing systems, omission errors increased by 28.7% and commission errors by 56.9% when the system produced false alerts (Lyell et al., 2017).

Expertise does not protect against automation bias. Neither does training. Nor explicit instructions to "always verify." The mechanism runs deeper than conscious intention.

The 5 mechanisms of automation bias

1. Omission errors: when the AI says nothing, you see nothing

If a suicide risk scoring tool displays "low risk," you are less likely to pick up warning signals than without the tool. It's not that you're paying less attention—it's that your attention reallocates to other tasks (the relationship, notes, planning). The AI takes over monitoring, and your cognitive system frees up resources elsewhere. When the AI errs, the safety net has vanished.

2. Commission errors: following an incorrect recommendation

Even more troubling: when the AI gives an incorrect recommendation, clinicians follow it even in the presence of contradictory information. In the CDSS studies mentioned above, physicians abandoned an initially correct diagnosis in 6 to 11% of cases after the tool gave them incorrect advice. In other words, they had already reached the right answer, and the tool made them doubt it enough to change their mind. In psychotherapy, a tool suggesting a diagnosis could lead you to unconsciously look for symptoms that confirm it, rather than testing the hypothesis.

3. "Phantom memory": when AI fabricates your memories

Perhaps the most alarming finding for clinical practice. In an aviation study, 67% of pilots who followed a false engine fire alert reported having seen confirmations on other instruments—confirmations that did not exist (Mosier et al., 1998). Their memory had fabricated recollections consistent with the AI's recommendation. This phenomenon, called phantom memory, has not yet been studied in clinical settings—but the question arises: could a psychologist "remember" symptoms confirming an erroneous algorithmic diagnosis?

4. "Learned carelessness": the progressive erosion of vigilance

The bias doesn't set in overnight. It follows a positive feedback loop: the AI works correctly → your trust increases → you verify less → the AI keeps working → your trust increases further. Parasuraman and Manzey (2010) call this mechanism learned carelessness. After months of incident-free use, vigilance reaches its minimum—precisely when an AI error would have the most serious consequences.

5. The reliability paradox

The more reliable an AI system is, the more dangerous it becomes when it fails. Counter-intuitive, but logical: an AI that errs often keeps you vigilant (you verify), while an AI that rarely errs lulls you (you delegate). Studies show that automation with variable reliability reduces complacency, but at the cost of trust in the system. It's a dilemma with no simple solution.
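
To make the paradox concrete, here is a minimal toy calculation (an illustrative sketch, not a result from the cited studies). It assumes that the clinician's verification rate falls as the system's track record improves, the learned-carelessness pattern described above; the function name, the 1,000-decision horizon, and the verification rule are all hypothetical.

    # Toy model of the reliability paradox. Purely illustrative: the numbers and
    # the verification rule are assumptions, not data from the cited studies.
    # Assumption: vigilance erodes as the system proves itself (learned
    # carelessness), modeled crudely as verify_rate = 1 - reliability.

    def toy_outcomes(reliability: float, decisions: int = 1000):
        """Return (AI errors, errors that slip through, share slipping through)."""
        ai_errors = decisions * (1.0 - reliability)   # how often the AI is wrong
        verify_rate = 1.0 - reliability               # assumed residual vigilance
        missed = ai_errors * (1.0 - verify_rate)      # errors nobody double-checks
        return ai_errors, missed, missed / ai_errors

    for r in (0.70, 0.90, 0.99):
        errors, missed, share = toy_outcomes(r)
        print(f"reliability {r:.0%}: {errors:.0f} AI errors, "
              f"{missed:.1f} unchecked ({share:.0%} slip through)")

Under these deliberately crude assumptions, the more reliable system produces far fewer errors overall, yet almost every error it does make goes unchecked: fewer failures, each one far more likely to reach the patient unexamined.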

Three concepts, three levels

Automation bias sits between two neighboring concepts you may already know. Together, the three form a coherent triptych.

Algorithm appreciation — the attitude

The tendency to prefer algorithmic recommendations over human judgment. It's a disposition, not necessarily a problem. It becomes problematic when it transforms into...

Automation bias — the behavior

Following the recommendation without verification, even when contradictory cues are present. This is no longer a preference but a suspension of critical judgment. This is where clinical risk lies.

Algorithm aversion — the reaction

The disproportionate rejection of a tool after observing it make an error. Aversion is often the rebound from bias: swinging from blind trust to total rejection, without ever reaching calibrated trust.

The clinical goal is neither appreciation nor aversion, but calibrated trust: knowing when the AI is reliable, in which context, and for what type of decision.

Clinical vignette

Dr. Martin, a psychologist in a community mental health center, has been using a discourse-analysis-based diagnostic aid for 6 months. The tool produces diagnostic hypotheses with confidence scores. So far, its suggestions have proven accurate in the vast majority of cases.

A new patient, Lucas, 28, presents with concentration difficulties at work. The tool analyzes the initial interview and suggests "Attention Deficit Disorder (probability 78%)." Dr. Martin notes that Lucas is indeed restless in session, struggles to maintain the thread of conversation, and reports longstanding academic difficulties.

However, reviewing his notes after the session, Dr. Martin realizes he failed to explore several alternative paths: Lucas had mentioned an intense marital conflict, sleep disturbances for 3 months, and a tripling of coffee consumption. All these elements were in his notes, but he hadn't "seen" them as competing diagnostic hypotheses.

Read through the lens of automation bias: Dr. Martin experienced a classic omission error. The tool directed his attention toward one hypothesis (ADHD), and his cognitive system stopped searching for alternatives. He didn't "forget" the other cues; he processed them with reduced attention, in line with the algorithmic suggestion he had already received. Six months of incident-free use had bred learned carelessness: he no longer checked the tool's suggestions as rigorously as he did at first.

In practice for the clinician

  • Form your hypothesis before consulting the tool: note your clinical impression before looking at the AI suggestion. This creates an independent anchor that better resists algorithmic confirmation bias.
  • Practice "anti-AI differential diagnosis": when the tool suggests a diagnosis, actively seek the two most plausible alternative hypotheses. This is the clinical equivalent of the variable-priority training that reduces complacency in aviation studies.
  • Alternate sessions with and without the tool: human factors research shows that variable-reliability automation reduces complacency. Using the tool every other session (or every third) maintains higher clinical vigilance than systematic use.
  • Keep a divergence journal: note every time your clinical impression diverges from the AI suggestion (a minimal tally sketch follows this list). If you note no divergence at all after 3 months, that is probably a sign the bias has set in, not that the AI is always right.
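
Here is one way such a tally could look for clinicians who keep their notes digitally. Everything in it is hypothetical (the Entry fields, the 90-day window, the warning text); it simply operationalizes the rule of thumb in the last bullet and is not a validated instrument.

    # Hypothetical divergence-journal tally. Field names, the 90-day window and
    # the warning text are illustrative assumptions, not a clinical standard.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Entry:
        session_date: date
        ai_suggestion: str    # e.g. the tool's diagnostic hypothesis
        my_impression: str    # impression written down BEFORE reading the AI output
        diverged: bool        # did the two meaningfully differ?

    def divergence_report(journal: list[Entry], window_days: int = 90) -> str:
        """Summarize the divergence rate over the recent window; flag a rate of zero."""
        if not journal:
            return "No entries yet."
        latest = max(e.session_date for e in journal)
        recent = [e for e in journal if (latest - e.session_date).days <= window_days]
        rate = sum(e.diverged for e in recent) / len(recent)
        if rate == 0:
            return (f"0 divergences in {len(recent)} sessions: possible automation bias; "
                    "check whether you still form an independent impression first.")
        return f"Divergence rate: {rate:.0%} over {len(recent)} sessions."

The point is not the code but the habit: a dated record of impressions formed before reading the tool's output is what makes a zero-divergence pattern visible at all.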

Points of caution

Automation bias does NOT mean that:

  • AI tools should not be used in clinical practice—benefits are real when the system works correctly
  • Clinicians are incompetent—the bias affects all experts, in all domains; it's a normal cognitive mechanism
  • The solution is to "pay more attention"—vigilance instructions have little effect; structural strategies are needed

Research limitations:

  • Ecological validity: most studies use laboratory simulations (30-120 min) with high failure rates (12-50%). Whether the findings generalize to real clinical practice (months of use, very low error rates) remains to be documented.
  • Cognitivist framework: human factors research treats the operator as an information-processing system. The relational, ethical, and institutional dimensions of clinical practice are not accounted for.
  • Emerging field: studies on automation bias in psychotherapy specifically are virtually non-existent. Data comes from aviation, air traffic control, and radiology.

Further reading

  • Key synthesis: Parasuraman, R. & Manzey, D. H. (2010). Complacency and Bias in Human Use of Automation: An Attentional Integration. Human Factors, 52(3), 381-410.
  • Foundational article: Mosier, K. L. & Skitka, L. J. (1996). Human Decision Makers and Automated Decision Aids: Made for Each Other? In Automation and Human Performance, 201-220.
  • Clinical data (radiology): Alberdi, E. et al. (2004). Effects of Incorrect Computer-Aided Detection Output on Human Decision-Making in Mammography. Academic Radiology, 11(8), 909-918.
  • Trust in automation: Lee, J. D. & See, K. A. (2004). Trust in Automation: Designing for Appropriate Reliance. Human Factors, 46(1), 50-80.

Last updated: March 2026